00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2023 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3288 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.140 Fetching changes from the remote Git repository 00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.166 Using shallow fetch with depth 1 00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.166 > git --version # timeout=10 00:00:00.188 > git --version # 'git version 2.39.2' 00:00:00.188 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.215 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.808 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.820 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.833 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD) 00:00:03.833 > git config core.sparsecheckout # timeout=10 00:00:03.845 > git read-tree -mu HEAD # timeout=10 00:00:03.860 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5 00:00:03.880 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems" 00:00:03.880 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:03.970 [Pipeline] Start of Pipeline 00:00:03.981 [Pipeline] library 00:00:03.982 Loading library shm_lib@master 00:00:03.982 Library shm_lib@master is cached. Copying from home. 
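(For reference: the pinned jbp checkout above can be reproduced locally with a shallow fetch. The sketch below is a condensed equivalent, not the job's exact sequence, and omits the proxy and credential setup shown in the log.)
  git init jbp && cd jbp
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422   # detached HEAD at the revision checked out above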
00:00:03.997 [Pipeline] node 00:00:04.008 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.009 [Pipeline] { 00:00:04.019 [Pipeline] catchError 00:00:04.020 [Pipeline] { 00:00:04.031 [Pipeline] wrap 00:00:04.037 [Pipeline] { 00:00:04.044 [Pipeline] stage 00:00:04.046 [Pipeline] { (Prologue) 00:00:04.246 [Pipeline] sh 00:00:04.527 + logger -p user.info -t JENKINS-CI 00:00:04.541 [Pipeline] echo 00:00:04.542 Node: GP8 00:00:04.549 [Pipeline] sh 00:00:04.843 [Pipeline] setCustomBuildProperty 00:00:04.854 [Pipeline] echo 00:00:04.856 Cleanup processes 00:00:04.860 [Pipeline] sh 00:00:05.148 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.148 617032 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.160 [Pipeline] sh 00:00:05.502 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.502 ++ grep -v 'sudo pgrep' 00:00:05.502 ++ awk '{print $1}' 00:00:05.502 + sudo kill -9 00:00:05.502 + true 00:00:05.525 [Pipeline] cleanWs 00:00:05.533 [WS-CLEANUP] Deleting project workspace... 00:00:05.534 [WS-CLEANUP] Deferred wipeout is used... 00:00:05.543 [WS-CLEANUP] done 00:00:05.545 [Pipeline] setCustomBuildProperty 00:00:05.558 [Pipeline] sh 00:00:05.841 + sudo git config --global --replace-all safe.directory '*' 00:00:05.935 [Pipeline] httpRequest 00:00:05.957 [Pipeline] echo 00:00:05.959 Sorcerer 10.211.164.101 is alive 00:00:05.966 [Pipeline] httpRequest 00:00:05.971 HttpMethod: GET 00:00:05.972 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:05.972 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:05.991 Response Code: HTTP/1.1 200 OK 00:00:05.992 Success: Status code 200 is in the accepted range: 200,404 00:00:05.992 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:20.363 [Pipeline] sh 00:00:20.644 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:20.919 [Pipeline] httpRequest 00:00:20.937 [Pipeline] echo 00:00:20.939 Sorcerer 10.211.164.101 is alive 00:00:20.949 [Pipeline] httpRequest 00:00:20.953 HttpMethod: GET 00:00:20.954 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:20.954 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:20.971 Response Code: HTTP/1.1 200 OK 00:00:20.972 Success: Status code 200 is in the accepted range: 200,404 00:00:20.972 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:43.911 [Pipeline] sh 00:00:44.195 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:50.779 [Pipeline] sh 00:00:51.063 + git -C spdk log --oneline -n5 00:00:51.063 f7b31b2b9 log: declare g_deprecation_epoch static 00:00:51.063 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:00:51.063 3731556bd lvol: declare g_lvol_if static 00:00:51.063 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:00:51.063 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:00:51.080 [Pipeline] withCredentials 00:00:51.091 > git --version # timeout=10 00:00:51.102 > git --version # 'git version 2.39.2' 00:00:51.123 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:51.126 [Pipeline] { 
00:00:51.135 [Pipeline] retry 00:00:51.137 [Pipeline] { 00:00:51.154 [Pipeline] sh 00:00:51.672 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:52.250 [Pipeline] } 00:00:52.271 [Pipeline] // retry 00:00:52.276 [Pipeline] } 00:00:52.292 [Pipeline] // withCredentials 00:00:52.300 [Pipeline] httpRequest 00:00:52.324 [Pipeline] echo 00:00:52.325 Sorcerer 10.211.164.101 is alive 00:00:52.333 [Pipeline] httpRequest 00:00:52.338 HttpMethod: GET 00:00:52.338 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:52.339 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:52.344 Response Code: HTTP/1.1 200 OK 00:00:52.344 Success: Status code 200 is in the accepted range: 200,404 00:00:52.345 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.930 [Pipeline] sh 00:01:08.217 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:11.515 [Pipeline] sh 00:01:11.802 + git -C dpdk log --oneline -n5 00:01:11.802 caf0f5d395 version: 22.11.4 00:01:11.802 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:11.802 dc9c799c7d vhost: fix missing spinlock unlock 00:01:11.802 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:11.802 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:11.812 [Pipeline] } 00:01:11.830 [Pipeline] // stage 00:01:11.840 [Pipeline] stage 00:01:11.843 [Pipeline] { (Prepare) 00:01:11.859 [Pipeline] writeFile 00:01:11.872 [Pipeline] sh 00:01:12.154 + logger -p user.info -t JENKINS-CI 00:01:12.166 [Pipeline] sh 00:01:12.452 + logger -p user.info -t JENKINS-CI 00:01:12.463 [Pipeline] sh 00:01:12.748 + cat autorun-spdk.conf 00:01:12.749 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.749 SPDK_TEST_NVMF=1 00:01:12.749 SPDK_TEST_NVME_CLI=1 00:01:12.749 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.749 SPDK_TEST_NVMF_NICS=e810 00:01:12.749 SPDK_TEST_VFIOUSER=1 00:01:12.749 SPDK_RUN_UBSAN=1 00:01:12.749 NET_TYPE=phy 00:01:12.749 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.749 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.756 RUN_NIGHTLY=1 00:01:12.762 [Pipeline] readFile 00:01:12.801 [Pipeline] withEnv 00:01:12.803 [Pipeline] { 00:01:12.818 [Pipeline] sh 00:01:13.105 + set -ex 00:01:13.106 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:13.106 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.106 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.106 ++ SPDK_TEST_NVMF=1 00:01:13.106 ++ SPDK_TEST_NVME_CLI=1 00:01:13.106 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.106 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.106 ++ SPDK_TEST_VFIOUSER=1 00:01:13.106 ++ SPDK_RUN_UBSAN=1 00:01:13.106 ++ NET_TYPE=phy 00:01:13.106 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:13.106 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.106 ++ RUN_NIGHTLY=1 00:01:13.106 + case $SPDK_TEST_NVMF_NICS in 00:01:13.106 + DRIVERS=ice 00:01:13.106 + [[ tcp == \r\d\m\a ]] 00:01:13.106 + [[ -n ice ]] 00:01:13.106 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:13.106 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:13.106 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:13.106 rmmod: ERROR: Module irdma is not currently loaded 00:01:13.106 rmmod: ERROR: Module i40iw is not currently loaded 00:01:13.106 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 
00:01:13.106 + true 00:01:13.106 + for D in $DRIVERS 00:01:13.106 + sudo modprobe ice 00:01:13.106 + exit 0 00:01:13.116 [Pipeline] } 00:01:13.134 [Pipeline] // withEnv 00:01:13.140 [Pipeline] } 00:01:13.158 [Pipeline] // stage 00:01:13.169 [Pipeline] catchError 00:01:13.170 [Pipeline] { 00:01:13.187 [Pipeline] timeout 00:01:13.187 Timeout set to expire in 50 min 00:01:13.189 [Pipeline] { 00:01:13.205 [Pipeline] stage 00:01:13.208 [Pipeline] { (Tests) 00:01:13.225 [Pipeline] sh 00:01:13.510 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.510 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.510 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.510 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:13.510 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.510 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.510 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:13.510 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.510 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.510 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.510 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:13.510 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.510 + source /etc/os-release 00:01:13.510 ++ NAME='Fedora Linux' 00:01:13.510 ++ VERSION='38 (Cloud Edition)' 00:01:13.510 ++ ID=fedora 00:01:13.510 ++ VERSION_ID=38 00:01:13.510 ++ VERSION_CODENAME= 00:01:13.510 ++ PLATFORM_ID=platform:f38 00:01:13.510 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:13.510 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:13.510 ++ LOGO=fedora-logo-icon 00:01:13.510 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:13.510 ++ HOME_URL=https://fedoraproject.org/ 00:01:13.510 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:13.510 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:13.510 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:13.510 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:13.510 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:13.510 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:13.510 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:13.510 ++ SUPPORT_END=2024-05-14 00:01:13.510 ++ VARIANT='Cloud Edition' 00:01:13.510 ++ VARIANT_ID=cloud 00:01:13.510 + uname -a 00:01:13.510 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:13.510 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.417 Hugepages 00:01:15.417 node hugesize free / total 00:01:15.417 node0 1048576kB 0 / 0 00:01:15.417 node0 2048kB 0 / 0 00:01:15.417 node1 1048576kB 0 / 0 00:01:15.417 node1 2048kB 0 / 0 00:01:15.417 00:01:15.417 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.417 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:15.417 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 
00:01:15.417 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:15.417 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:15.417 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:15.417 + rm -f /tmp/spdk-ld-path 00:01:15.417 + source autorun-spdk.conf 00:01:15.417 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.417 ++ SPDK_TEST_NVMF=1 00:01:15.417 ++ SPDK_TEST_NVME_CLI=1 00:01:15.417 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.417 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.417 ++ SPDK_TEST_VFIOUSER=1 00:01:15.417 ++ SPDK_RUN_UBSAN=1 00:01:15.417 ++ NET_TYPE=phy 00:01:15.417 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:15.417 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:15.417 ++ RUN_NIGHTLY=1 00:01:15.417 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.417 + [[ -n '' ]] 00:01:15.417 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.417 + for M in /var/spdk/build-*-manifest.txt 00:01:15.417 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.417 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.417 + for M in /var/spdk/build-*-manifest.txt 00:01:15.417 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.417 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.417 ++ uname 00:01:15.417 + [[ Linux == \L\i\n\u\x ]] 00:01:15.417 + sudo dmesg -T 00:01:15.417 + sudo dmesg --clear 00:01:15.417 + dmesg_pid=617769 00:01:15.417 + [[ Fedora Linux == FreeBSD ]] 00:01:15.417 + sudo dmesg -Tw 00:01:15.417 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.417 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.417 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.417 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.417 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.417 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.417 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.417 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:15.417 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.417 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.417 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.417 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.417 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.417 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.417 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.417 Test configuration: 00:01:15.417 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.417 SPDK_TEST_NVMF=1 00:01:15.417 SPDK_TEST_NVME_CLI=1 00:01:15.417 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.417 SPDK_TEST_NVMF_NICS=e810 00:01:15.417 SPDK_TEST_VFIOUSER=1 00:01:15.417 SPDK_RUN_UBSAN=1 00:01:15.417 NET_TYPE=phy 00:01:15.417 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:15.417 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:15.417 RUN_NIGHTLY=1 22:41:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.417 22:41:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.417 22:41:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.417 22:41:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.417 22:41:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.417 22:41:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.417 22:41:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.417 22:41:51 -- paths/export.sh@5 -- $ export PATH 00:01:15.418 22:41:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.418 22:41:51 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.418 22:41:51 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:15.418 22:41:51 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721680911.XXXXXX 00:01:15.688 22:41:51 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721680911.lLNwMI 00:01:15.688 22:41:51 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:15.688 22:41:51 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:01:15.688 22:41:51 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:15.688 22:41:51 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:15.688 22:41:51 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.688 22:41:51 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.688 22:41:51 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:15.688 22:41:51 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:15.688 22:41:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.688 22:41:51 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:15.688 22:41:51 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:15.688 22:41:51 -- pm/common@17 -- $ local monitor 00:01:15.688 22:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.688 22:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.688 22:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.688 22:41:51 -- pm/common@21 -- $ date +%s 00:01:15.688 22:41:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.688 22:41:51 -- pm/common@21 -- $ date +%s 00:01:15.688 22:41:51 -- pm/common@25 -- $ sleep 1 00:01:15.688 22:41:51 -- pm/common@21 -- $ date +%s 00:01:15.688 22:41:51 -- pm/common@21 -- $ date +%s 00:01:15.688 22:41:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721680911 00:01:15.688 22:41:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721680911 00:01:15.688 22:41:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721680911 00:01:15.688 22:41:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721680911 00:01:15.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721680911_collect-vmstat.pm.log 00:01:15.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721680911_collect-cpu-load.pm.log 00:01:15.688 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721680911_collect-cpu-temp.pm.log 00:01:15.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721680911_collect-bmc-pm.bmc.pm.log 00:01:16.648 22:41:52 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:16.648 22:41:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.648 22:41:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.648 22:41:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.648 22:41:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.648 Mon Jul 22 08:41:52 PM UTC 2024 00:01:16.648 22:41:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.648 v24.09-pre-297-gf7b31b2b9 00:01:16.648 22:41:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.648 22:41:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.648 22:41:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.648 22:41:52 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:16.648 22:41:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:16.648 22:41:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.648 ************************************ 00:01:16.648 START TEST ubsan 00:01:16.648 ************************************ 00:01:16.648 22:41:52 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:16.648 using ubsan 00:01:16.648 00:01:16.649 real 0m0.000s 00:01:16.649 user 0m0.000s 00:01:16.649 sys 0m0.000s 00:01:16.649 22:41:52 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:16.649 22:41:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:16.649 ************************************ 00:01:16.649 END TEST ubsan 00:01:16.649 ************************************ 00:01:16.649 22:41:52 -- common/autotest_common.sh@1142 -- $ return 0 00:01:16.649 22:41:52 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:16.649 22:41:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:16.649 22:41:52 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:16.649 22:41:52 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:16.649 22:41:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:16.649 22:41:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.649 ************************************ 00:01:16.649 START TEST build_native_dpdk 00:01:16.649 ************************************ 00:01:16.649 22:41:52 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:16.649 22:41:52 build_native_dpdk -- 
common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:16.649 caf0f5d395 version: 22.11.4 00:01:16.649 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:16.649 dc9c799c7d vhost: fix missing spinlock unlock 00:01:16.649 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:16.649 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@370 -- $ 
cmp_versions 22.11.4 '<' 21.11.0 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:16.649 patching file config/rte_config.h 00:01:16.649 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:16.649 22:41:52 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:16.649 patching file lib/pcapng/rte_pcapng.c 00:01:16.649 Hunk #1 succeeded at 110 (offset -18 lines). 
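(For reference: the two cmp_versions traces above perform a field-by-field dotted-version comparison. Below is a minimal stand-alone sketch assuming plain numeric dot-separated versions; the helper name and layout are illustrative and simpler than the real scripts/common.sh, which also splits on '-' and ':'.)
  version_lt() {                      # returns 0 (true) when $1 < $2
      local IFS=.                     # split both versions on dots
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}  # missing fields compare as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                        # equal versions are not "less than"
  }
  version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"   # matches the 'return 1' traced above
  version_lt 22.11.4 24.07.0 && echo "22.11.4 is older than 24.07.0"       # matches the 'return 0' traced above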
00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:16.649 22:41:52 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:23.216 The Meson build system 00:01:23.216 Version: 1.3.1 00:01:23.216 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:23.216 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:23.216 Build type: native build 00:01:23.216 Program cat found: YES (/usr/bin/cat) 00:01:23.216 Project name: DPDK 00:01:23.216 Project version: 22.11.4 00:01:23.216 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:23.216 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:23.216 Host machine cpu family: x86_64 00:01:23.216 Host machine cpu: x86_64 00:01:23.216 Message: ## Building in Developer Mode ## 00:01:23.216 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:23.216 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:23.216 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:23.216 Program objdump found: YES (/usr/bin/objdump) 00:01:23.216 Program python3 found: YES (/usr/bin/python3) 00:01:23.216 Program cat found: YES (/usr/bin/cat) 00:01:23.216 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:23.216 Checking for size of "void *" : 8 00:01:23.216 Checking for size of "void *" : 8 (cached) 00:01:23.216 Library m found: YES 00:01:23.216 Library numa found: YES 00:01:23.216 Has header "numaif.h" : YES 00:01:23.216 Library fdt found: NO 00:01:23.216 Library execinfo found: NO 00:01:23.216 Has header "execinfo.h" : YES 00:01:23.216 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:23.216 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:23.216 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:23.216 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:23.216 Run-time dependency openssl found: YES 3.0.9 00:01:23.216 Run-time dependency libpcap found: YES 1.10.4 00:01:23.216 Has header "pcap.h" with dependency libpcap: YES 00:01:23.216 Compiler for C supports arguments -Wcast-qual: YES 00:01:23.216 Compiler for C supports arguments -Wdeprecated: YES 00:01:23.216 Compiler for C supports arguments -Wformat: YES 00:01:23.216 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:23.216 Compiler for C supports arguments -Wformat-security: NO 00:01:23.216 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:23.216 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:23.216 Compiler for C supports arguments -Wnested-externs: YES 00:01:23.216 Compiler for C supports arguments -Wold-style-definition: YES 00:01:23.216 Compiler for C supports arguments -Wpointer-arith: YES 00:01:23.216 Compiler for C supports arguments -Wsign-compare: YES 00:01:23.216 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:23.216 Compiler for C supports arguments -Wundef: YES 00:01:23.216 Compiler for C supports arguments -Wwrite-strings: YES 00:01:23.216 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:23.216 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:23.217 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:23.217 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:23.217 Compiler for C supports arguments -mavx512f: YES 00:01:23.217 Checking if "AVX512 checking" compiles: YES 00:01:23.217 Fetching value of define "__SSE4_2__" : 1 00:01:23.217 Fetching value of define "__AES__" : 1 00:01:23.217 Fetching value of define "__AVX__" : 1 00:01:23.217 Fetching value of define "__AVX2__" : (undefined) 00:01:23.217 Fetching value of define "__AVX512BW__" : (undefined) 00:01:23.217 Fetching value of define "__AVX512CD__" : (undefined) 00:01:23.217 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:23.217 Fetching value of define "__AVX512F__" : (undefined) 00:01:23.217 Fetching value of define "__AVX512VL__" : (undefined) 00:01:23.217 Fetching value of define "__PCLMUL__" : 1 00:01:23.217 Fetching value of define "__RDRND__" : 1 00:01:23.217 Fetching value of define "__RDSEED__" : (undefined) 00:01:23.217 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:23.217 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:23.217 Message: lib/kvargs: Defining dependency "kvargs" 00:01:23.217 Message: lib/telemetry: Defining dependency "telemetry" 00:01:23.217 Checking for function "getentropy" : YES 00:01:23.217 Message: lib/eal: Defining dependency "eal" 00:01:23.217 Message: lib/ring: Defining dependency "ring" 00:01:23.217 Message: lib/rcu: Defining dependency "rcu" 00:01:23.217 Message: lib/mempool: Defining dependency "mempool" 00:01:23.217 Message: lib/mbuf: Defining dependency "mbuf" 00:01:23.217 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:23.217 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:23.217 Compiler for C supports arguments -mpclmul: YES 00:01:23.217 Compiler for C supports arguments -maes: YES 00:01:23.217 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:23.217 Compiler for C supports arguments -mavx512bw: YES 00:01:23.217 Compiler for C supports arguments -mavx512dq: YES 00:01:23.217 Compiler for C supports arguments -mavx512vl: YES 00:01:23.217 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:23.217 Compiler for C supports arguments -mavx2: YES 00:01:23.217 Compiler for C supports arguments -mavx: YES 00:01:23.217 Message: lib/net: Defining dependency "net" 00:01:23.217 Message: lib/meter: Defining dependency "meter" 00:01:23.217 Message: lib/ethdev: Defining dependency "ethdev" 00:01:23.217 Message: lib/pci: Defining dependency "pci" 00:01:23.217 Message: lib/cmdline: Defining dependency "cmdline" 00:01:23.217 Message: lib/metrics: Defining dependency "metrics" 00:01:23.217 Message: lib/hash: Defining dependency "hash" 00:01:23.217 Message: lib/timer: Defining dependency "timer" 00:01:23.217 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:23.217 Compiler for C supports arguments -mavx2: YES (cached) 00:01:23.217 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:23.217 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:23.217 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:23.217 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:23.217 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:23.217 Message: lib/acl: Defining dependency "acl" 00:01:23.217 Message: lib/bbdev: Defining dependency "bbdev" 00:01:23.217 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:23.217 Run-time dependency libelf found: YES 0.190 00:01:23.217 Message: lib/bpf: Defining dependency "bpf" 00:01:23.217 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:23.217 Message: lib/compressdev: Defining dependency "compressdev" 00:01:23.217 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:23.217 Message: lib/distributor: Defining dependency "distributor" 00:01:23.217 Message: lib/efd: Defining dependency "efd" 00:01:23.217 Message: lib/eventdev: Defining dependency "eventdev" 00:01:23.217 Message: lib/gpudev: Defining dependency "gpudev" 00:01:23.217 Message: lib/gro: Defining dependency "gro" 00:01:23.217 Message: lib/gso: Defining dependency "gso" 00:01:23.217 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:23.217 Message: lib/jobstats: Defining dependency "jobstats" 00:01:23.217 Message: lib/latencystats: Defining dependency "latencystats" 00:01:23.217 Message: lib/lpm: Defining dependency "lpm" 00:01:23.217 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:23.217 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:23.217 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:23.217 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:23.217 Message: lib/member: Defining dependency "member" 00:01:23.217 Message: lib/pcapng: Defining dependency "pcapng" 00:01:23.217 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:23.217 Message: lib/power: Defining dependency "power" 00:01:23.217 Message: lib/rawdev: Defining dependency "rawdev" 00:01:23.217 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:23.217 Message: lib/dmadev: Defining dependency "dmadev" 00:01:23.217 Message: lib/rib: Defining dependency "rib" 00:01:23.217 Message: lib/reorder: Defining dependency "reorder" 00:01:23.217 Message: lib/sched: Defining dependency "sched" 00:01:23.217 Message: lib/security: Defining dependency "security" 00:01:23.217 Message: lib/stack: Defining dependency "stack" 00:01:23.217 Has header "linux/userfaultfd.h" : YES 00:01:23.217 Message: lib/vhost: Defining dependency "vhost" 00:01:23.217 Message: lib/ipsec: Defining dependency "ipsec" 00:01:23.217 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:23.217 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:23.217 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:23.217 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:23.217 Message: lib/fib: Defining dependency "fib" 00:01:23.217 Message: lib/port: Defining dependency "port" 00:01:23.217 Message: lib/pdump: Defining dependency "pdump" 00:01:23.217 Message: lib/table: Defining dependency "table" 00:01:23.217 Message: lib/pipeline: Defining dependency "pipeline" 00:01:23.217 Message: lib/graph: Defining dependency "graph" 00:01:23.217 Message: lib/node: Defining dependency "node" 00:01:23.217 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:23.217 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:23.217 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:23.217 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:23.217 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:23.217 Compiler for C supports arguments -Wno-unused-value: YES 00:01:24.597 Compiler for C supports arguments -Wno-format: YES 00:01:24.597 Compiler for C supports arguments -Wno-format-security: YES 00:01:24.597 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:24.597 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:24.597 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:24.597 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:24.597 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:24.597 Compiler for C supports arguments -mavx2: YES (cached) 00:01:24.597 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:24.597 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.597 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:24.597 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:24.597 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:24.597 Program doxygen found: YES (/usr/bin/doxygen) 00:01:24.597 Configuring doxy-api.conf using configuration 00:01:24.597 Program sphinx-build found: NO 00:01:24.597 Configuring rte_build_config.h using configuration 00:01:24.597 Message: 00:01:24.597 ================= 00:01:24.597 Applications Enabled 00:01:24.597 ================= 00:01:24.597 00:01:24.597 apps: 00:01:24.597 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:24.597 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:24.597 test-security-perf, 00:01:24.597 00:01:24.597 Message: 00:01:24.597 ================= 00:01:24.597 Libraries Enabled 00:01:24.597 ================= 00:01:24.598 00:01:24.598 libs: 00:01:24.598 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:24.598 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:24.598 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:24.598 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:24.598 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:24.598 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:24.598 table, pipeline, graph, node, 00:01:24.598 00:01:24.598 Message: 00:01:24.598 =============== 00:01:24.598 Drivers Enabled 00:01:24.598 =============== 00:01:24.598 00:01:24.598 common: 00:01:24.598 00:01:24.598 bus: 00:01:24.598 pci, vdev, 00:01:24.598 mempool: 00:01:24.598 ring, 00:01:24.598 dma: 00:01:24.598 00:01:24.598 net: 00:01:24.598 i40e, 00:01:24.598 raw: 00:01:24.598 00:01:24.598 crypto: 00:01:24.598 00:01:24.598 compress: 00:01:24.598 00:01:24.598 regex: 00:01:24.598 00:01:24.598 vdpa: 00:01:24.598 00:01:24.598 event: 00:01:24.598 00:01:24.598 baseband: 00:01:24.598 00:01:24.598 gpu: 00:01:24.598 00:01:24.598 00:01:24.598 Message: 00:01:24.598 ================= 00:01:24.598 Content Skipped 00:01:24.598 ================= 00:01:24.598 00:01:24.598 apps: 00:01:24.598 00:01:24.598 libs: 00:01:24.598 kni: explicitly disabled via build config (deprecated lib) 00:01:24.598 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:24.598 00:01:24.598 drivers: 00:01:24.598 common/cpt: not in enabled drivers build config 00:01:24.598 common/dpaax: not in enabled drivers build config 00:01:24.598 common/iavf: not in enabled drivers build config 00:01:24.598 common/idpf: not in enabled drivers build config 00:01:24.598 common/mvep: not in enabled drivers build config 00:01:24.598 common/octeontx: not in enabled drivers build config 00:01:24.598 bus/auxiliary: not in enabled drivers build config 00:01:24.598 bus/dpaa: not in enabled drivers build config 00:01:24.598 bus/fslmc: not in enabled drivers build config 00:01:24.598 bus/ifpga: not in enabled drivers build config 00:01:24.598 bus/vmbus: not in enabled drivers build config 00:01:24.598 common/cnxk: not in enabled drivers build config 00:01:24.598 common/mlx5: not in enabled drivers build config 00:01:24.598 common/qat: not in enabled drivers build config 00:01:24.598 common/sfc_efx: not in enabled drivers build config 00:01:24.598 mempool/bucket: not in enabled drivers build config 00:01:24.598 mempool/cnxk: not in enabled drivers build config 00:01:24.598 mempool/dpaa: not in enabled drivers build config 00:01:24.598 mempool/dpaa2: not in enabled drivers build config 00:01:24.598 mempool/octeontx: not in enabled drivers build config 00:01:24.598 mempool/stack: not in enabled drivers build config 00:01:24.598 dma/cnxk: not in enabled drivers build config 00:01:24.598 dma/dpaa: not in enabled drivers build config 00:01:24.598 dma/dpaa2: not in enabled drivers build config 00:01:24.598 dma/hisilicon: not in enabled drivers build config 00:01:24.598 dma/idxd: not in enabled drivers build config 00:01:24.598 dma/ioat: not in enabled drivers build config 00:01:24.598 dma/skeleton: not in enabled drivers build config 00:01:24.598 net/af_packet: not in enabled drivers build config 00:01:24.598 net/af_xdp: not in enabled drivers build config 00:01:24.598 net/ark: not in enabled drivers build config 00:01:24.598 net/atlantic: not in enabled drivers build config 00:01:24.598 net/avp: not in enabled drivers build config 00:01:24.598 net/axgbe: not in enabled drivers build config 00:01:24.598 net/bnx2x: not in enabled drivers build config 00:01:24.598 net/bnxt: not in 
enabled drivers build config 00:01:24.598 net/bonding: not in enabled drivers build config 00:01:24.598 net/cnxk: not in enabled drivers build config 00:01:24.598 net/cxgbe: not in enabled drivers build config 00:01:24.598 net/dpaa: not in enabled drivers build config 00:01:24.598 net/dpaa2: not in enabled drivers build config 00:01:24.598 net/e1000: not in enabled drivers build config 00:01:24.598 net/ena: not in enabled drivers build config 00:01:24.598 net/enetc: not in enabled drivers build config 00:01:24.598 net/enetfec: not in enabled drivers build config 00:01:24.598 net/enic: not in enabled drivers build config 00:01:24.598 net/failsafe: not in enabled drivers build config 00:01:24.598 net/fm10k: not in enabled drivers build config 00:01:24.598 net/gve: not in enabled drivers build config 00:01:24.598 net/hinic: not in enabled drivers build config 00:01:24.598 net/hns3: not in enabled drivers build config 00:01:24.598 net/iavf: not in enabled drivers build config 00:01:24.598 net/ice: not in enabled drivers build config 00:01:24.598 net/idpf: not in enabled drivers build config 00:01:24.598 net/igc: not in enabled drivers build config 00:01:24.598 net/ionic: not in enabled drivers build config 00:01:24.598 net/ipn3ke: not in enabled drivers build config 00:01:24.598 net/ixgbe: not in enabled drivers build config 00:01:24.598 net/kni: not in enabled drivers build config 00:01:24.598 net/liquidio: not in enabled drivers build config 00:01:24.598 net/mana: not in enabled drivers build config 00:01:24.598 net/memif: not in enabled drivers build config 00:01:24.598 net/mlx4: not in enabled drivers build config 00:01:24.598 net/mlx5: not in enabled drivers build config 00:01:24.598 net/mvneta: not in enabled drivers build config 00:01:24.598 net/mvpp2: not in enabled drivers build config 00:01:24.598 net/netvsc: not in enabled drivers build config 00:01:24.598 net/nfb: not in enabled drivers build config 00:01:24.598 net/nfp: not in enabled drivers build config 00:01:24.598 net/ngbe: not in enabled drivers build config 00:01:24.598 net/null: not in enabled drivers build config 00:01:24.598 net/octeontx: not in enabled drivers build config 00:01:24.598 net/octeon_ep: not in enabled drivers build config 00:01:24.598 net/pcap: not in enabled drivers build config 00:01:24.598 net/pfe: not in enabled drivers build config 00:01:24.598 net/qede: not in enabled drivers build config 00:01:24.598 net/ring: not in enabled drivers build config 00:01:24.598 net/sfc: not in enabled drivers build config 00:01:24.598 net/softnic: not in enabled drivers build config 00:01:24.598 net/tap: not in enabled drivers build config 00:01:24.598 net/thunderx: not in enabled drivers build config 00:01:24.598 net/txgbe: not in enabled drivers build config 00:01:24.598 net/vdev_netvsc: not in enabled drivers build config 00:01:24.598 net/vhost: not in enabled drivers build config 00:01:24.598 net/virtio: not in enabled drivers build config 00:01:24.598 net/vmxnet3: not in enabled drivers build config 00:01:24.598 raw/cnxk_bphy: not in enabled drivers build config 00:01:24.598 raw/cnxk_gpio: not in enabled drivers build config 00:01:24.598 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:24.598 raw/ifpga: not in enabled drivers build config 00:01:24.598 raw/ntb: not in enabled drivers build config 00:01:24.598 raw/skeleton: not in enabled drivers build config 00:01:24.598 crypto/armv8: not in enabled drivers build config 00:01:24.598 crypto/bcmfs: not in enabled drivers build config 00:01:24.598 
crypto/caam_jr: not in enabled drivers build config 00:01:24.598 crypto/ccp: not in enabled drivers build config 00:01:24.598 crypto/cnxk: not in enabled drivers build config 00:01:24.598 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.598 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.598 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.598 crypto/mlx5: not in enabled drivers build config 00:01:24.598 crypto/mvsam: not in enabled drivers build config 00:01:24.598 crypto/nitrox: not in enabled drivers build config 00:01:24.598 crypto/null: not in enabled drivers build config 00:01:24.598 crypto/octeontx: not in enabled drivers build config 00:01:24.598 crypto/openssl: not in enabled drivers build config 00:01:24.598 crypto/scheduler: not in enabled drivers build config 00:01:24.598 crypto/uadk: not in enabled drivers build config 00:01:24.598 crypto/virtio: not in enabled drivers build config 00:01:24.598 compress/isal: not in enabled drivers build config 00:01:24.598 compress/mlx5: not in enabled drivers build config 00:01:24.598 compress/octeontx: not in enabled drivers build config 00:01:24.598 compress/zlib: not in enabled drivers build config 00:01:24.598 regex/mlx5: not in enabled drivers build config 00:01:24.598 regex/cn9k: not in enabled drivers build config 00:01:24.598 vdpa/ifc: not in enabled drivers build config 00:01:24.598 vdpa/mlx5: not in enabled drivers build config 00:01:24.598 vdpa/sfc: not in enabled drivers build config 00:01:24.598 event/cnxk: not in enabled drivers build config 00:01:24.598 event/dlb2: not in enabled drivers build config 00:01:24.598 event/dpaa: not in enabled drivers build config 00:01:24.598 event/dpaa2: not in enabled drivers build config 00:01:24.598 event/dsw: not in enabled drivers build config 00:01:24.598 event/opdl: not in enabled drivers build config 00:01:24.598 event/skeleton: not in enabled drivers build config 00:01:24.598 event/sw: not in enabled drivers build config 00:01:24.598 event/octeontx: not in enabled drivers build config 00:01:24.598 baseband/acc: not in enabled drivers build config 00:01:24.598 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:24.598 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:24.598 baseband/la12xx: not in enabled drivers build config 00:01:24.598 baseband/null: not in enabled drivers build config 00:01:24.598 baseband/turbo_sw: not in enabled drivers build config 00:01:24.598 gpu/cuda: not in enabled drivers build config 00:01:24.598 00:01:24.598 00:01:24.598 Build targets in project: 316 00:01:24.598 00:01:24.598 DPDK 22.11.4 00:01:24.598 00:01:24.599 User defined options 00:01:24.599 libdir : lib 00:01:24.599 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.599 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:24.599 c_link_args : 00:01:24.599 enable_docs : false 00:01:24.599 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:24.599 enable_kmods : false 00:01:24.599 machine : native 00:01:24.599 tests : false 00:01:24.599 00:01:24.599 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.599 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
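(For reference: a condensed sketch of the external-DPDK flow this job follows, with flags trimmed to those visible in the meson invocation above and paths shortened; the install and SPDK configure steps are not part of this excerpt.)
  cd dpdk
  meson setup build-tmp --prefix="$PWD/build" --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  ninja -C build-tmp && meson install -C build-tmp      # populates the build/ prefix
  cd ../spdk
  ./configure --with-dpdk=../dpdk/build                 # SPDK is later built against that prefix (see config_params above)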
00:01:24.599 22:42:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:24.599 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:24.866 [1/745] Generating lib/rte_telemetry_def with a custom command 00:01:24.866 [2/745] Generating lib/rte_kvargs_def with a custom command 00:01:24.866 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:24.866 [4/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.866 [5/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:24.866 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.866 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.866 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.866 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.866 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.866 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.866 [12/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.866 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.866 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.866 [15/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.866 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.866 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.866 [18/745] Linking static target lib/librte_kvargs.a 00:01:24.866 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.866 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:25.128 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:25.128 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:25.128 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:25.128 [24/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:25.129 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:25.129 [26/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:25.129 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.129 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:25.129 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:25.129 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:25.129 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:25.129 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.129 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.129 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.129 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.129 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:25.129 [37/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.129 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.129 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.129 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:25.129 [41/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.129 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.129 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:25.129 [44/745] Generating lib/rte_eal_mingw with a custom command 00:01:25.129 [45/745] Generating lib/rte_ring_def with a custom command 00:01:25.129 [46/745] Generating lib/rte_ring_mingw with a custom command 00:01:25.129 [47/745] Generating lib/rte_eal_def with a custom command 00:01:25.129 [48/745] Generating lib/rte_rcu_def with a custom command 00:01:25.129 [49/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:25.129 [50/745] Generating lib/rte_rcu_mingw with a custom command 00:01:25.129 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.129 [52/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:25.129 [53/745] Generating lib/rte_mempool_mingw with a custom command 00:01:25.129 [54/745] Generating lib/rte_mempool_def with a custom command 00:01:25.129 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.129 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.129 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:25.129 [58/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:25.129 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.394 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.394 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.394 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:25.394 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.394 [64/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.394 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.394 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.394 [67/745] Generating lib/rte_net_def with a custom command 00:01:25.395 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:25.395 [69/745] Generating lib/rte_meter_def with a custom command 00:01:25.395 [70/745] Generating lib/rte_net_mingw with a custom command 00:01:25.395 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.395 [72/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.395 [73/745] Generating lib/rte_meter_mingw with a custom command 00:01:25.395 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.395 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.395 [76/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.395 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.395 [78/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.395 [79/745] Generating lib/rte_ethdev_def with a custom command 00:01:25.395 [80/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:25.395 [81/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:25.659 [82/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.659 [83/745] Generating lib/rte_pci_def with a custom command 00:01:25.659 [84/745] Linking static target lib/librte_ring.a 00:01:25.659 [85/745] Linking target lib/librte_kvargs.so.23.0 00:01:25.659 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.659 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.659 [88/745] Linking static target lib/librte_meter.a 00:01:25.659 [89/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:25.659 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:25.659 [91/745] Linking static target lib/librte_pci.a 00:01:25.659 [92/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.659 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.659 [94/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:25.659 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.922 [96/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:25.922 [97/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:25.922 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:25.922 [99/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.922 [100/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.922 [101/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:26.187 [102/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:26.187 [103/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.187 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:26.187 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:26.187 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:26.187 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:26.187 [108/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:26.187 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:26.187 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:26.187 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.187 [112/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.187 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:26.187 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:26.187 [115/745] Generating lib/rte_hash_mingw with a custom command 00:01:26.187 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:01:26.187 [117/745] Generating lib/rte_timer_def with a custom command 00:01:26.187 [118/745] Generating lib/rte_hash_def with a custom command 00:01:26.187 [119/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:26.187 [120/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:26.187 [121/745] Generating lib/rte_timer_mingw with a custom command 00:01:26.451 [122/745] Linking static target 
lib/librte_telemetry.a 00:01:26.451 [123/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:26.451 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:26.451 [125/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:26.451 [126/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:26.451 [127/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:26.451 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:26.451 [129/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:26.451 [130/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:26.451 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:26.451 [132/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:26.451 [133/745] Generating lib/rte_acl_def with a custom command 00:01:26.451 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:26.713 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:26.713 [136/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:26.713 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:26.713 [138/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:26.713 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:26.713 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:26.713 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:26.713 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:26.978 [143/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:26.978 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:26.978 [145/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:26.978 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:26.978 [147/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.978 [148/745] Generating lib/rte_bpf_def with a custom command 00:01:26.978 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:26.978 [150/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:26.978 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:26.979 [152/745] Generating lib/rte_cfgfile_def with a custom command 00:01:26.979 [153/745] Linking target lib/librte_telemetry.so.23.0 00:01:26.979 [154/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:27.237 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:27.237 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:27.237 [157/745] Generating lib/rte_compressdev_def with a custom command 00:01:27.237 [158/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:27.237 [159/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:27.237 [160/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:27.237 [161/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:27.237 [162/745] Linking static target lib/librte_mempool.a 00:01:27.237 [163/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:27.237 [164/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:27.237 [165/745] Linking static target lib/librte_cmdline.a 00:01:27.237 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:27.237 [167/745] Generating lib/rte_cryptodev_def with a custom command 00:01:27.500 [168/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:27.500 [169/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:27.500 [170/745] Linking static target lib/librte_metrics.a 00:01:27.500 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:27.500 [172/745] Linking static target lib/librte_timer.a 00:01:27.500 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.500 [174/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:27.500 [175/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:27.500 [176/745] Generating lib/rte_distributor_def with a custom command 00:01:27.500 [177/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:27.500 [178/745] Generating lib/rte_distributor_mingw with a custom command 00:01:27.500 [179/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:27.500 [180/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:27.500 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:27.500 [182/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:27.500 [183/745] Linking static target lib/librte_cfgfile.a 00:01:27.500 [184/745] Linking static target lib/librte_rcu.a 00:01:27.500 [185/745] Linking static target lib/librte_eal.a 00:01:27.500 [186/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.500 [187/745] Generating lib/rte_efd_def with a custom command 00:01:27.500 [188/745] Generating lib/rte_efd_mingw with a custom command 00:01:27.500 [189/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.762 [190/745] Linking static target lib/librte_net.a 00:01:27.762 [191/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:27.762 [192/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:27.762 [193/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:27.762 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:27.762 [195/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:27.762 [196/745] Linking static target lib/librte_bitratestats.a 00:01:28.027 [197/745] Generating lib/rte_eventdev_def with a custom command 00:01:28.027 [198/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:28.027 [199/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.027 [200/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:28.027 [201/745] Generating lib/rte_gpudev_def with a custom command 00:01:28.027 [202/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.027 [203/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.027 [204/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:28.027 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.292 [206/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.292 
[207/745] Generating lib/rte_gro_def with a custom command 00:01:28.292 [208/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:28.292 [209/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:28.292 [210/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.292 [211/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:28.293 [212/745] Generating lib/rte_gro_mingw with a custom command 00:01:28.293 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:28.293 [214/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:28.555 [215/745] Generating lib/rte_gso_def with a custom command 00:01:28.555 [216/745] Generating lib/rte_gso_mingw with a custom command 00:01:28.555 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:28.555 [218/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:28.825 [219/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.825 [220/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:28.825 [221/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:28.825 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:29.087 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:29.087 [224/745] Generating lib/rte_ip_frag_def with a custom command 00:01:29.087 [225/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:29.087 [226/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:29.087 [227/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.087 [228/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:29.087 [229/745] Generating lib/rte_jobstats_def with a custom command 00:01:29.087 [230/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:29.087 [231/745] Generating lib/rte_latencystats_def with a custom command 00:01:29.087 [232/745] Linking static target lib/librte_bbdev.a 00:01:29.087 [233/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:29.087 [234/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:29.350 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:29.350 [236/745] Linking static target lib/librte_compressdev.a 00:01:29.350 [237/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:29.350 [238/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:29.350 [239/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:29.350 [240/745] Generating lib/rte_lpm_def with a custom command 00:01:29.350 [241/745] Linking static target lib/librte_jobstats.a 00:01:29.350 [242/745] Generating lib/rte_lpm_mingw with a custom command 00:01:29.350 [243/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:29.350 [244/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:29.350 [245/745] Linking static target lib/librte_distributor.a 00:01:29.350 [246/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:29.619 [247/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:29.619 [248/745] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:29.619 [249/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:29.619 [250/745] Linking static target lib/librte_bpf.a 00:01:29.619 [251/745] Generating lib/rte_member_def with a custom command 00:01:29.619 [252/745] Generating lib/rte_member_mingw with a custom command 00:01:29.619 [253/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:29.619 [254/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:29.619 [255/745] Generating lib/rte_pcapng_def with a custom command 00:01:29.619 [256/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:29.879 [257/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:29.879 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:29.879 [259/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:29.879 [260/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.879 [261/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:29.879 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:29.879 [263/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:29.879 [264/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.879 [265/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:29.879 [266/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:30.144 [267/745] Generating lib/rte_power_def with a custom command 00:01:30.144 [268/745] Generating lib/rte_power_mingw with a custom command 00:01:30.144 [269/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:30.144 [270/745] Generating lib/rte_rawdev_def with a custom command 00:01:30.144 [271/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.144 [272/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:30.144 [273/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:30.144 [274/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:30.144 [275/745] Generating lib/rte_regexdev_def with a custom command 00:01:30.144 [276/745] Generating lib/rte_dmadev_def with a custom command 00:01:30.144 [277/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.144 [278/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:30.411 [279/745] Generating lib/rte_rib_def with a custom command 00:01:30.411 [280/745] Generating lib/rte_rib_mingw with a custom command 00:01:30.411 [281/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:30.411 [282/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:30.411 [283/745] Generating lib/rte_reorder_def with a custom command 00:01:30.411 [284/745] Generating lib/rte_reorder_mingw with a custom command 00:01:30.411 [285/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:30.411 [286/745] Linking static target lib/librte_gpudev.a 00:01:30.411 [287/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:30.411 [288/745] Linking static target lib/librte_gro.a 00:01:30.671 [289/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:30.671 [290/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:30.671 [291/745] 
Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:30.671 [292/745] Linking static target lib/librte_latencystats.a 00:01:30.671 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:30.671 [294/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:30.671 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:30.671 [296/745] Generating lib/rte_sched_def with a custom command 00:01:30.671 [297/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:30.671 [298/745] Generating lib/rte_sched_mingw with a custom command 00:01:30.671 [299/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:30.933 [300/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:30.933 [301/745] Generating lib/rte_security_def with a custom command 00:01:30.933 [302/745] Generating lib/rte_security_mingw with a custom command 00:01:30.933 [303/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:30.933 [304/745] Generating lib/rte_stack_def with a custom command 00:01:30.933 [305/745] Generating lib/rte_stack_mingw with a custom command 00:01:30.933 [306/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:30.933 [307/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.933 [308/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:30.933 [309/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:30.933 [310/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:30.933 [311/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.933 [312/745] Linking static target lib/librte_dmadev.a 00:01:30.933 [313/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:30.933 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:30.933 [315/745] Linking static target lib/librte_rawdev.a 00:01:30.933 [316/745] Linking static target lib/librte_stack.a 00:01:30.933 [317/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:30.933 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:30.933 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:31.196 [320/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:31.196 [321/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.196 [322/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:31.196 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.196 [324/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:31.196 [325/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:31.196 [326/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:31.196 [327/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:31.196 [328/745] Linking static target lib/librte_ip_frag.a 00:01:31.196 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:31.196 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:31.461 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.461 [332/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 
00:01:31.461 [333/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:31.461 [334/745] Linking static target lib/librte_gso.a 00:01:31.461 [335/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.727 [336/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:31.727 [337/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.727 [338/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:31.727 [339/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:31.727 [340/745] Generating lib/rte_fib_def with a custom command 00:01:31.727 [341/745] Generating lib/rte_fib_mingw with a custom command 00:01:31.727 [342/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.727 [343/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:31.727 [344/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.727 [345/745] Linking static target lib/librte_pcapng.a 00:01:31.995 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.995 [347/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.258 [348/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:32.258 [349/745] Linking static target lib/librte_lpm.a 00:01:32.258 [350/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.258 [351/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.258 [352/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:32.258 [353/745] Linking static target lib/librte_reorder.a 00:01:32.258 [354/745] Linking static target lib/librte_regexdev.a 00:01:32.258 [355/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.258 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:32.520 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.520 [358/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.520 [359/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.520 [360/745] Linking static target lib/librte_power.a 00:01:32.785 [361/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.785 [362/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:32.785 [363/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.785 [364/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.785 [365/745] Linking static target lib/librte_security.a 00:01:32.785 [366/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.785 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:32.785 [368/745] Generating lib/rte_port_def with a custom command 00:01:32.785 [369/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.785 [370/745] Generating lib/rte_pdump_def with a custom command 00:01:32.785 [371/745] Generating lib/rte_pdump_mingw with a custom command 00:01:33.052 [372/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:33.052 [373/745] Linking static target lib/librte_rib.a 00:01:33.052 [374/745] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:33.052 [375/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:33.052 [376/745] Linking static target lib/librte_ethdev.a 00:01:33.052 [377/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:33.052 [378/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:33.052 [379/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:33.052 [380/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:33.052 [381/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:33.313 [382/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:33.313 [383/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:33.313 [384/745] Linking static target lib/librte_efd.a 00:01:33.313 [385/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.313 [386/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:33.313 [387/745] Linking static target lib/librte_mbuf.a 00:01:33.313 [388/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:33.313 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:33.581 [390/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:33.581 [391/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:33.581 [392/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.581 [393/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.842 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:33.842 [395/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.842 [396/745] Linking static target lib/acl/libavx512_tmp.a 00:01:33.842 [397/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:33.842 [398/745] Linking static target lib/acl/libavx2_tmp.a 00:01:33.842 [399/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:33.842 [400/745] Linking static target lib/librte_acl.a 00:01:33.842 [401/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.842 [402/745] Generating lib/rte_table_def with a custom command 00:01:33.842 [403/745] Generating lib/rte_table_mingw with a custom command 00:01:33.842 [404/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:34.104 [405/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:34.104 [406/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:34.104 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:34.104 [408/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:34.104 [409/745] Linking static target lib/librte_member.a 00:01:34.104 [410/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:34.104 [411/745] Linking static target lib/librte_fib.a 00:01:34.104 [412/745] Generating lib/rte_pipeline_def with a custom command 00:01:34.368 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:34.368 [414/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:34.368 [415/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:34.368 [416/745] Linking static target lib/librte_hash.a 00:01:34.368 [417/745] Generating 
lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.368 [418/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.634 [419/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.634 [420/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:34.634 [421/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:34.634 [422/745] Linking static target lib/librte_eventdev.a 00:01:34.634 [423/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:34.634 [424/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:34.634 [425/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:34.634 [426/745] Linking static target lib/librte_sched.a 00:01:34.634 [427/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:34.634 [428/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.634 [429/745] Generating lib/rte_graph_def with a custom command 00:01:34.634 [430/745] Generating lib/rte_graph_mingw with a custom command 00:01:34.895 [431/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:34.895 [432/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.895 [433/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:34.895 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:34.895 [435/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:34.895 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:34.895 [437/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:35.161 [438/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:35.161 [439/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:35.161 [440/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:35.161 [441/745] Generating lib/rte_node_mingw with a custom command 00:01:35.161 [442/745] Generating lib/rte_node_def with a custom command 00:01:35.161 [443/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:35.161 [444/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.161 [445/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:35.424 [446/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:35.424 [447/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:35.424 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.424 [449/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:35.424 [450/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.424 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:35.424 [452/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.424 [453/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:35.424 [454/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:35.424 [455/745] Linking static target lib/librte_pdump.a 00:01:35.424 [456/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:35.424 [457/745] Generating 
drivers/rte_mempool_ring_def with a custom command 00:01:35.424 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:35.687 [459/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:35.687 [460/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:35.687 [461/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.687 [462/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:35.687 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.687 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.687 [465/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.953 [466/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.953 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:35.953 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:35.953 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:35.953 [470/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:35.953 [471/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:35.953 [472/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.953 [473/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.218 [474/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:36.218 [475/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:36.218 [476/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.218 [477/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.218 [478/745] Linking static target drivers/librte_bus_vdev.a 00:01:36.482 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:36.482 [480/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.482 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:36.482 [482/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:36.482 [483/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:36.482 [484/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:36.746 [485/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.746 [486/745] Linking static target lib/librte_cryptodev.a 00:01:36.746 [487/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:36.746 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:36.746 [489/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.746 [490/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.746 [491/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:36.746 [492/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.013 [493/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:37.013 [494/745] Linking static target lib/librte_graph.a 00:01:37.013 [495/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:37.013 [496/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:37.013 [497/745] Compiling 
C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:37.013 [498/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:37.281 [499/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:37.281 [500/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:37.281 [501/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:37.281 [502/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:37.281 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:37.281 [504/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:37.281 [505/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:37.281 [506/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.281 [507/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:37.281 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.281 [509/745] Linking static target drivers/librte_bus_pci.a 00:01:37.281 [510/745] Linking static target lib/librte_table.a 00:01:37.281 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:37.543 [512/745] Linking static target lib/librte_port.a 00:01:37.543 [513/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:37.543 [514/745] Linking static target lib/librte_ipsec.a 00:01:37.841 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:37.841 [516/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:38.108 [517/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.108 [518/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.108 [519/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:38.108 [520/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.375 [521/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.640 [522/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.640 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:38.640 [524/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:38.640 [525/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.640 [526/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:38.902 [527/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:38.902 [528/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:39.168 [529/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.168 [530/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.168 [531/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:39.168 [532/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:39.168 [533/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:39.433 [534/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:39.433 
[535/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.433 [536/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.433 [537/745] Linking static target drivers/librte_mempool_ring.a 00:01:39.433 [538/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.701 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:39.701 [540/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:39.701 [541/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:39.701 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:39.962 [543/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:39.962 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:39.962 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:40.228 [546/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:40.228 [547/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:40.495 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:40.761 [549/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:40.761 [550/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.761 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:40.761 [552/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:40.761 [553/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:40.761 [554/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:41.340 [555/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:41.340 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:41.340 [557/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:41.600 [558/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:41.600 [559/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:41.600 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:41.600 [561/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:41.600 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:41.871 [563/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:41.871 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:41.871 [565/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:41.871 [566/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:41.871 [567/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:41.871 [568/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:42.133 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:42.408 [570/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:42.408 [571/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:42.408 [572/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.408 [573/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:42.408 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:42.408 [575/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:42.408 [576/745] Linking target lib/librte_eal.so.23.0 00:01:42.669 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:42.669 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:42.669 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:42.669 [580/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:42.669 [581/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:42.669 [582/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:42.939 [583/745] Linking target lib/librte_ring.so.23.0 00:01:42.939 [584/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:42.939 [585/745] Linking target lib/librte_meter.so.23.0 00:01:42.939 [586/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:42.939 [587/745] Linking target lib/librte_pci.so.23.0 00:01:42.939 [588/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:42.939 [589/745] Linking target lib/librte_timer.so.23.0 00:01:42.939 [590/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:42.939 [591/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:43.216 [592/745] Linking target lib/librte_cfgfile.so.23.0 00:01:43.216 [593/745] Linking target lib/librte_acl.so.23.0 00:01:43.216 [594/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:43.216 [595/745] Linking target lib/librte_jobstats.so.23.0 00:01:43.216 [596/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:43.216 [597/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:43.216 [598/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:43.216 [599/745] Linking target lib/librte_rcu.so.23.0 00:01:43.216 [600/745] Linking target lib/librte_rawdev.so.23.0 00:01:43.216 [601/745] Linking target lib/librte_mempool.so.23.0 00:01:43.216 [602/745] Linking target lib/librte_stack.so.23.0 00:01:43.216 [603/745] Linking target lib/librte_dmadev.so.23.0 00:01:43.216 [604/745] Linking target lib/librte_graph.so.23.0 00:01:43.216 [605/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:43.216 [606/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:43.216 [607/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:43.216 [608/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:43.216 [609/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:43.482 [610/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:43.482 [611/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:43.482 [612/745] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:43.483 [613/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:43.483 [614/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:43.483 [615/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:43.483 [616/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:43.483 [617/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:43.483 [618/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:43.745 [619/745] Linking target lib/librte_rib.so.23.0 00:01:43.745 [620/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:43.745 [621/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:43.745 [622/745] Linking target lib/librte_mbuf.so.23.0 00:01:43.745 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:43.745 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:44.005 [625/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:44.005 [626/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:44.005 [627/745] Linking target lib/librte_fib.so.23.0 00:01:44.005 [628/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:44.005 [629/745] Linking target lib/librte_bbdev.so.23.0 00:01:44.005 [630/745] Linking target lib/librte_net.so.23.0 00:01:44.005 [631/745] Linking target lib/librte_cryptodev.so.23.0 00:01:44.005 [632/745] Linking target lib/librte_compressdev.so.23.0 00:01:44.005 [633/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:44.005 [634/745] Linking target lib/librte_gpudev.so.23.0 00:01:44.005 [635/745] Linking target lib/librte_distributor.so.23.0 00:01:44.005 [636/745] Linking target lib/librte_reorder.so.23.0 00:01:44.005 [637/745] Linking target lib/librte_sched.so.23.0 00:01:44.005 [638/745] Linking target lib/librte_regexdev.so.23.0 00:01:44.265 [639/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:44.265 [640/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:44.265 [641/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:44.265 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:44.265 [643/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:44.265 [644/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:44.265 [645/745] Linking target lib/librte_hash.so.23.0 00:01:44.265 [646/745] Linking target lib/librte_security.so.23.0 00:01:44.265 [647/745] Linking target lib/librte_cmdline.so.23.0 00:01:44.529 [648/745] Linking target lib/librte_ethdev.so.23.0 00:01:44.529 [649/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:44.529 [650/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:44.529 [651/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:44.529 [652/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:44.529 [653/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:44.529 
[654/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:44.792 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:44.792 [656/745] Linking target lib/librte_lpm.so.23.0 00:01:44.792 [657/745] Linking target lib/librte_efd.so.23.0 00:01:44.792 [658/745] Linking target lib/librte_member.so.23.0 00:01:44.792 [659/745] Linking target lib/librte_ipsec.so.23.0 00:01:44.792 [660/745] Linking target lib/librte_pcapng.so.23.0 00:01:44.792 [661/745] Linking target lib/librte_gso.so.23.0 00:01:44.792 [662/745] Linking target lib/librte_metrics.so.23.0 00:01:44.792 [663/745] Linking target lib/librte_gro.so.23.0 00:01:44.792 [664/745] Linking target lib/librte_ip_frag.so.23.0 00:01:44.792 [665/745] Linking target lib/librte_bpf.so.23.0 00:01:44.792 [666/745] Linking target lib/librte_power.so.23.0 00:01:44.792 [667/745] Linking target lib/librte_eventdev.so.23.0 00:01:44.792 [668/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:44.792 [669/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:45.052 [670/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:45.052 [671/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:45.052 [672/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:45.052 [673/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:45.052 [674/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:45.052 [675/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:45.052 [676/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:45.052 [677/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:45.052 [678/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:45.052 [679/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:45.052 [680/745] Linking target lib/librte_latencystats.so.23.0 00:01:45.052 [681/745] Linking target lib/librte_bitratestats.so.23.0 00:01:45.052 [682/745] Linking target lib/librte_port.so.23.0 00:01:45.052 [683/745] Linking target lib/librte_pdump.so.23.0 00:01:45.052 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:45.052 [685/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:45.052 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:45.332 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:45.333 [688/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:45.333 [689/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:45.333 [690/745] Linking target lib/librte_table.so.23.0 00:01:45.594 [691/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:45.594 [692/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:45.594 [693/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:45.594 [694/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.594 [695/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:45.594 [696/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:45.852 [697/745] 
Linking static target drivers/libtmp_rte_net_i40e.a 00:01:45.852 [698/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:45.852 [699/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:46.110 [700/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:46.110 [701/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:46.676 [702/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:46.676 [703/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:46.677 [704/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:46.677 [705/745] Linking static target drivers/librte_net_i40e.a 00:01:46.935 [706/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:47.202 [707/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:47.462 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:47.462 [709/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.462 [710/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:48.028 [711/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:49.405 [712/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:49.405 [713/745] Linking static target lib/librte_node.a 00:01:49.663 [714/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:49.663 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.663 [716/745] Linking target lib/librte_node.so.23.0 00:01:52.193 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:52.760 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:54.136 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:09.089 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.759 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.759 [722/745] Linking static target lib/librte_vhost.a 00:02:55.759 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.759 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:22.342 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.342 [726/745] Linking static target lib/librte_pipeline.a 00:03:22.342 [727/745] Linking target app/dpdk-test-sad 00:03:22.342 [728/745] Linking target app/dpdk-test-pipeline 00:03:22.342 [729/745] Linking target app/dpdk-test-flow-perf 00:03:22.342 [730/745] Linking target app/dpdk-test-regex 00:03:22.342 [731/745] Linking target app/dpdk-test-acl 00:03:22.342 [732/745] Linking target app/dpdk-dumpcap 00:03:22.342 [733/745] Linking target app/dpdk-test-eventdev 00:03:22.342 [734/745] Linking target app/dpdk-test-compress-perf 00:03:22.342 [735/745] Linking target app/dpdk-test-cmdline 00:03:22.342 [736/745] Linking target app/dpdk-pdump 00:03:22.342 [737/745] Linking target app/dpdk-proc-info 00:03:22.342 [738/745] Linking target app/dpdk-test-fib 00:03:22.342 [739/745] Linking target app/dpdk-test-gpudev 00:03:22.342 [740/745] Linking target app/dpdk-test-bbdev 00:03:22.342 [741/745] Linking target app/dpdk-test-security-perf 00:03:22.342 [742/745] Linking target 
app/dpdk-test-crypto-perf 00:03:22.342 [743/745] Linking target app/dpdk-testpmd 00:03:22.342 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.342 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:22.342 22:43:58 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:22.342 22:43:58 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:22.342 22:43:58 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:22.342 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:22.342 [0/1] Installing files. 00:03:22.915 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:22.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:22.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
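The entries above stage the DPDK example sources and helper scripts (ipsec-secgw test scripts, ipv4_multicast, server_node_efd, ip_pipeline, bpf, vmdq, link_status_interrupt, ip_reassembly) into dpdk/build/share/dpdk/examples as part of this install step. For orientation only, the lines below are a minimal sketch of the EAL init/launch/cleanup skeleton that such example main.c files generally follow; it assumes DPDK 22.11-era headers (rte_eal.h, rte_launch.h, rte_lcore.h) and is an illustration, not code taken from this build log.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Hypothetical worker; the installed examples plug their own processing loop in here. */
static int
lcore_worker(void *arg)
{
	(void)arg;
	printf("running on lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	/* Initialize the Environment Abstraction Layer; EAL consumes its own arguments. */
	if (rte_eal_init(argc, argv) < 0)
		rte_panic("Cannot init EAL\n");

	/* Run the worker on every worker lcore and on the main lcore, then wait for all of them. */
	rte_eal_mp_remote_launch(lcore_worker, NULL, CALL_MAIN);
	rte_eal_mp_wait_lcore();

	/* Release hugepages and other EAL resources before exiting. */
	rte_eal_cleanup();
	return 0;
}

Such a file would typically be compiled against the headers and libraries installed later in this step, for example via pkg-config --cflags --libs libdpdk, assuming the libdpdk.pc generated by this build is on PKG_CONFIG_PATH.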
00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:22.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:23.182 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:23.182 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.182 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:23.183 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.125 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.125 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.125 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.125 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:24.125 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:24.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:24.390 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:24.390 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:24.390 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:24.390 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:24.390 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:24.390 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:24.390 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:24.390 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:24.390 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:24.390 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:24.390 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:24.390 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:24.390 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:24.390 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:24.391 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:24.391 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:24.391 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:24.391 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:24.391 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:24.391 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:24.391 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:24.391 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:24.391 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:24.391 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:24.391 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:24.391 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:24.391 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:24.391 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:24.391 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:24.391 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:24.391 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:24.391 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:24.391 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:24.391 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:24.391 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:24.391 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:24.391 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:24.391 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:24.391 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:24.391 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:24.391 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:24.391 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:24.391 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:24.391 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:24.391 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:24.391 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:24.391 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:24.391 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:24.391 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:24.391 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:24.391 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:24.391 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:24.391 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:24.391 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:24.391 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:24.391 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:24.391 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:24.391 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:24.391 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:24.391 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:24.391 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:24.391 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:24.391 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:24.391 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:24.391 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:24.391 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:24.391 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:24.391 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:24.391 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:24.391 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:24.391 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:24.391 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:24.391 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:24.391 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:24.391 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:24.391 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:24.391 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:24.391 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:24.391 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:24.391 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:24.391 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:24.391 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:24.391 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:24.391 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:24.391 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:24.391 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:24.391 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:24.391 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:24.391 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:24.391 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:24.391 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:24.391 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:24.391 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:24.391 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:24.391 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:24.391 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:24.391 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:24.391 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:24.391 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:24.391 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:24.391 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:24.391 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:24.391 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:24.391 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:24.391 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:24.392 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:24.392 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:24.392 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:24.392 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:24.392 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:24.392 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:24.392 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:24.392 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:24.392 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:24.392 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:24.392 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:24.392 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:24.392 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:24.392 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:24.392 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:24.392 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:24.392 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:24.392 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:24.392 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:24.392 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:24.392 22:44:00 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:24.392 22:44:00 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:24.392 00:03:24.392 real 2m7.729s 00:03:24.392 user 17m55.049s 00:03:24.392 sys 2m8.853s 00:03:24.392 22:44:00 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:24.392 22:44:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:24.392 ************************************ 00:03:24.392 END TEST build_native_dpdk 00:03:24.392 ************************************ 00:03:24.392 22:44:00 -- common/autotest_common.sh@1142 -- $ return 0 00:03:24.392 22:44:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:24.392 22:44:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:24.392 22:44:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:24.392 22:44:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:24.392 22:44:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:24.392 22:44:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:24.392 22:44:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:24.392 22:44:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:24.652 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:03:24.652 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.652 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.912 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:25.481 Using 'verbs' RDMA provider 00:03:45.036 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:03.142 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:03.142 Creating mk/config.mk...done. 00:04:03.142 Creating mk/cc.flags.mk...done. 00:04:03.142 Type 'make' to build. 00:04:03.142 22:44:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:04:03.142 22:44:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:03.142 22:44:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:03.142 22:44:37 -- common/autotest_common.sh@10 -- $ set +x 00:04:03.142 ************************************ 00:04:03.142 START TEST make 00:04:03.142 ************************************ 00:04:03.142 22:44:37 make -- common/autotest_common.sh@1123 -- $ make -j48 00:04:03.142 make[1]: Nothing to be done for 'all'. 00:04:04.093 The Meson build system 00:04:04.093 Version: 1.3.1 00:04:04.093 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:04.093 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:04.093 Build type: native build 00:04:04.093 Project name: libvfio-user 00:04:04.093 Project version: 0.0.1 00:04:04.094 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:04.094 C linker for the host machine: gcc ld.bfd 2.39-16 00:04:04.094 Host machine cpu family: x86_64 00:04:04.094 Host machine cpu: x86_64 00:04:04.094 Run-time dependency threads found: YES 00:04:04.094 Library dl found: YES 00:04:04.094 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:04.094 Run-time dependency json-c found: YES 0.17 00:04:04.094 Run-time dependency cmocka found: YES 1.1.7 00:04:04.094 Program pytest-3 found: NO 00:04:04.094 Program flake8 found: NO 00:04:04.094 Program misspell-fixer found: NO 00:04:04.094 Program restructuredtext-lint found: NO 00:04:04.094 Program valgrind found: YES (/usr/bin/valgrind) 00:04:04.094 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:04.094 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:04.094 Compiler for C supports arguments -Wwrite-strings: YES 00:04:04.094 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:04.094 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:04.094 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:04.094 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:04.094 Build targets in project: 8 00:04:04.094 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:04.094 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:04.094 00:04:04.094 libvfio-user 0.0.1 00:04:04.094 00:04:04.094 User defined options 00:04:04.094 buildtype : debug 00:04:04.094 default_library: shared 00:04:04.094 libdir : /usr/local/lib 00:04:04.094 00:04:04.094 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:05.051 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:05.051 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:05.051 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:05.051 [3/37] Compiling C object samples/null.p/null.c.o 00:04:05.051 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:05.051 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:05.317 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:05.317 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:05.317 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:05.317 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:05.317 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:05.317 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:05.317 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:05.317 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:05.317 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:05.317 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:05.317 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:05.317 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:05.317 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:05.317 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:05.317 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:05.317 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:05.317 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:05.317 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:05.317 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:05.317 [25/37] Compiling C object samples/server.p/server.c.o 00:04:05.317 [26/37] Compiling C object samples/client.p/client.c.o 00:04:05.579 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:05.579 [28/37] Linking target samples/client 00:04:05.579 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:05.579 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:05.851 [31/37] Linking target test/unit_tests 00:04:05.851 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:05.851 [33/37] Linking target samples/gpio-pci-idio-16 00:04:05.851 [34/37] Linking target samples/lspci 00:04:05.851 [35/37] Linking target samples/null 00:04:05.851 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:05.851 [37/37] Linking target samples/server 00:04:05.851 INFO: autodetecting backend as ninja 00:04:05.851 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:04:06.113 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:06.691 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:06.691 ninja: no work to do. 00:04:33.291 CC lib/log/log.o 00:04:33.291 CC lib/ut/ut.o 00:04:33.291 CC lib/log/log_deprecated.o 00:04:33.291 CC lib/log/log_flags.o 00:04:33.291 CC lib/ut_mock/mock.o 00:04:33.291 LIB libspdk_ut.a 00:04:33.291 SO libspdk_ut.so.2.0 00:04:33.291 SYMLINK libspdk_ut.so 00:04:33.291 LIB libspdk_log.a 00:04:33.291 LIB libspdk_ut_mock.a 00:04:33.291 SO libspdk_log.so.7.0 00:04:33.291 SO libspdk_ut_mock.so.6.0 00:04:33.291 SYMLINK libspdk_log.so 00:04:33.291 SYMLINK libspdk_ut_mock.so 00:04:33.291 CXX lib/trace_parser/trace.o 00:04:33.291 CC lib/dma/dma.o 00:04:33.291 CC lib/util/base64.o 00:04:33.291 CC lib/util/bit_array.o 00:04:33.291 CC lib/util/cpuset.o 00:04:33.291 CC lib/util/crc16.o 00:04:33.291 CC lib/util/crc32.o 00:04:33.291 CC lib/util/crc32_ieee.o 00:04:33.291 CC lib/util/crc32c.o 00:04:33.291 CC lib/util/crc64.o 00:04:33.291 CC lib/util/dif.o 00:04:33.291 CC lib/util/fd_group.o 00:04:33.291 CC lib/util/fd.o 00:04:33.291 CC lib/util/file.o 00:04:33.291 CC lib/util/hexlify.o 00:04:33.291 CC lib/util/iov.o 00:04:33.291 CC lib/util/math.o 00:04:33.291 CC lib/util/net.o 00:04:33.291 CC lib/util/pipe.o 00:04:33.291 CC lib/util/strerror_tls.o 00:04:33.291 CC lib/util/string.o 00:04:33.291 CC lib/util/uuid.o 00:04:33.291 CC lib/util/xor.o 00:04:33.291 CC lib/util/zipf.o 00:04:33.291 CC lib/ioat/ioat.o 00:04:33.291 CC lib/vfio_user/host/vfio_user_pci.o 00:04:33.291 CC lib/vfio_user/host/vfio_user.o 00:04:33.291 LIB libspdk_dma.a 00:04:33.291 SO libspdk_dma.so.4.0 00:04:33.291 SYMLINK libspdk_dma.so 00:04:33.291 LIB libspdk_ioat.a 00:04:33.291 SO libspdk_ioat.so.7.0 00:04:33.291 LIB libspdk_vfio_user.a 00:04:33.291 SYMLINK libspdk_ioat.so 00:04:33.291 SO libspdk_vfio_user.so.5.0 00:04:33.291 SYMLINK libspdk_vfio_user.so 00:04:33.291 LIB libspdk_util.a 00:04:33.291 SO libspdk_util.so.10.0 00:04:33.291 SYMLINK libspdk_util.so 00:04:33.291 LIB libspdk_trace_parser.a 00:04:33.291 SO libspdk_trace_parser.so.5.0 00:04:33.291 CC lib/env_dpdk/env.o 00:04:33.291 CC lib/env_dpdk/memory.o 00:04:33.291 CC lib/env_dpdk/pci.o 00:04:33.291 CC lib/env_dpdk/init.o 00:04:33.291 CC lib/conf/conf.o 00:04:33.291 CC lib/env_dpdk/threads.o 00:04:33.291 CC lib/env_dpdk/pci_ioat.o 00:04:33.291 CC lib/rdma_utils/rdma_utils.o 00:04:33.291 CC lib/env_dpdk/pci_virtio.o 00:04:33.291 CC lib/env_dpdk/pci_vmd.o 00:04:33.291 CC lib/vmd/vmd.o 00:04:33.291 CC lib/vmd/led.o 00:04:33.291 CC lib/env_dpdk/pci_idxd.o 00:04:33.291 CC lib/env_dpdk/pci_event.o 00:04:33.291 CC lib/env_dpdk/sigbus_handler.o 00:04:33.291 CC lib/env_dpdk/pci_dpdk.o 00:04:33.291 CC lib/rdma_provider/common.o 00:04:33.291 CC lib/idxd/idxd.o 00:04:33.291 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:33.291 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:33.291 CC lib/idxd/idxd_user.o 00:04:33.291 CC lib/idxd/idxd_kernel.o 00:04:33.291 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:33.291 CC lib/json/json_parse.o 00:04:33.291 CC lib/json/json_util.o 00:04:33.291 CC lib/json/json_write.o 00:04:33.291 SYMLINK libspdk_trace_parser.so 00:04:33.291 LIB libspdk_conf.a 00:04:33.291 SO libspdk_conf.so.6.0 00:04:33.291 LIB libspdk_rdma_provider.a 00:04:33.291 SYMLINK libspdk_conf.so 00:04:33.291 SO libspdk_rdma_provider.so.6.0 
00:04:33.551 SYMLINK libspdk_rdma_provider.so 00:04:33.551 LIB libspdk_rdma_utils.a 00:04:33.551 SO libspdk_rdma_utils.so.1.0 00:04:33.551 SYMLINK libspdk_rdma_utils.so 00:04:33.551 LIB libspdk_json.a 00:04:33.551 SO libspdk_json.so.6.0 00:04:33.811 SYMLINK libspdk_json.so 00:04:33.811 LIB libspdk_idxd.a 00:04:33.811 SO libspdk_idxd.so.12.0 00:04:33.811 SYMLINK libspdk_idxd.so 00:04:34.099 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.099 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.099 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.099 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.099 LIB libspdk_vmd.a 00:04:34.099 SO libspdk_vmd.so.6.0 00:04:34.359 SYMLINK libspdk_vmd.so 00:04:34.618 LIB libspdk_jsonrpc.a 00:04:34.618 SO libspdk_jsonrpc.so.6.0 00:04:34.618 SYMLINK libspdk_jsonrpc.so 00:04:34.877 CC lib/rpc/rpc.o 00:04:35.447 LIB libspdk_rpc.a 00:04:35.447 SO libspdk_rpc.so.6.0 00:04:35.447 SYMLINK libspdk_rpc.so 00:04:35.707 CC lib/trace/trace.o 00:04:35.707 CC lib/trace/trace_flags.o 00:04:35.707 CC lib/trace/trace_rpc.o 00:04:35.707 CC lib/keyring/keyring.o 00:04:35.707 CC lib/keyring/keyring_rpc.o 00:04:35.707 CC lib/notify/notify.o 00:04:35.707 CC lib/notify/notify_rpc.o 00:04:35.966 LIB libspdk_keyring.a 00:04:35.966 LIB libspdk_notify.a 00:04:36.228 SO libspdk_keyring.so.1.0 00:04:36.228 SO libspdk_notify.so.6.0 00:04:36.228 SYMLINK libspdk_keyring.so 00:04:36.228 SYMLINK libspdk_notify.so 00:04:36.228 LIB libspdk_trace.a 00:04:36.228 SO libspdk_trace.so.10.0 00:04:36.489 SYMLINK libspdk_trace.so 00:04:36.748 CC lib/sock/sock.o 00:04:36.748 CC lib/sock/sock_rpc.o 00:04:36.748 CC lib/thread/iobuf.o 00:04:36.748 CC lib/thread/thread.o 00:04:36.748 LIB libspdk_env_dpdk.a 00:04:36.748 SO libspdk_env_dpdk.so.15.0 00:04:37.006 SYMLINK libspdk_env_dpdk.so 00:04:37.265 LIB libspdk_sock.a 00:04:37.265 SO libspdk_sock.so.10.0 00:04:37.523 SYMLINK libspdk_sock.so 00:04:37.782 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:37.782 CC lib/nvme/nvme_fabric.o 00:04:37.782 CC lib/nvme/nvme_ctrlr.o 00:04:37.782 CC lib/nvme/nvme_ns_cmd.o 00:04:37.782 CC lib/nvme/nvme_ns.o 00:04:37.782 CC lib/nvme/nvme_pcie_common.o 00:04:37.782 CC lib/nvme/nvme_pcie.o 00:04:37.782 CC lib/nvme/nvme_qpair.o 00:04:37.782 CC lib/nvme/nvme.o 00:04:37.782 CC lib/nvme/nvme_quirks.o 00:04:37.782 CC lib/nvme/nvme_transport.o 00:04:37.782 CC lib/nvme/nvme_discovery.o 00:04:37.782 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:37.782 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:37.782 CC lib/nvme/nvme_tcp.o 00:04:37.782 CC lib/nvme/nvme_opal.o 00:04:37.782 CC lib/nvme/nvme_io_msg.o 00:04:37.782 CC lib/nvme/nvme_poll_group.o 00:04:37.782 CC lib/nvme/nvme_zns.o 00:04:37.782 CC lib/nvme/nvme_stubs.o 00:04:37.782 CC lib/nvme/nvme_auth.o 00:04:37.782 CC lib/nvme/nvme_cuse.o 00:04:37.782 CC lib/nvme/nvme_vfio_user.o 00:04:37.782 CC lib/nvme/nvme_rdma.o 00:04:39.163 LIB libspdk_thread.a 00:04:39.163 SO libspdk_thread.so.10.1 00:04:39.163 SYMLINK libspdk_thread.so 00:04:39.163 CC lib/vfu_tgt/tgt_endpoint.o 00:04:39.163 CC lib/vfu_tgt/tgt_rpc.o 00:04:39.163 CC lib/virtio/virtio.o 00:04:39.163 CC lib/virtio/virtio_vhost_user.o 00:04:39.163 CC lib/virtio/virtio_vfio_user.o 00:04:39.163 CC lib/virtio/virtio_pci.o 00:04:39.163 CC lib/accel/accel.o 00:04:39.163 CC lib/accel/accel_rpc.o 00:04:39.163 CC lib/accel/accel_sw.o 00:04:39.163 CC lib/blob/blobstore.o 00:04:39.163 CC lib/init/json_config.o 00:04:39.163 CC lib/blob/request.o 00:04:39.163 CC lib/init/subsystem.o 00:04:39.163 CC lib/blob/zeroes.o 00:04:39.163 CC lib/init/subsystem_rpc.o 00:04:39.163 CC 
lib/blob/blob_bs_dev.o 00:04:39.163 CC lib/init/rpc.o 00:04:39.730 LIB libspdk_vfu_tgt.a 00:04:39.730 LIB libspdk_init.a 00:04:39.730 SO libspdk_vfu_tgt.so.3.0 00:04:39.730 SO libspdk_init.so.5.0 00:04:39.730 LIB libspdk_virtio.a 00:04:39.730 SYMLINK libspdk_vfu_tgt.so 00:04:39.730 SO libspdk_virtio.so.7.0 00:04:39.730 SYMLINK libspdk_init.so 00:04:39.730 SYMLINK libspdk_virtio.so 00:04:39.988 CC lib/event/app.o 00:04:39.989 CC lib/event/reactor.o 00:04:39.989 CC lib/event/log_rpc.o 00:04:39.989 CC lib/event/app_rpc.o 00:04:39.989 CC lib/event/scheduler_static.o 00:04:40.929 LIB libspdk_nvme.a 00:04:40.929 LIB libspdk_event.a 00:04:40.929 LIB libspdk_accel.a 00:04:40.929 SO libspdk_event.so.14.0 00:04:40.929 SO libspdk_nvme.so.13.1 00:04:40.929 SO libspdk_accel.so.16.0 00:04:41.189 SYMLINK libspdk_event.so 00:04:41.189 SYMLINK libspdk_accel.so 00:04:41.450 CC lib/bdev/bdev.o 00:04:41.450 CC lib/bdev/bdev_rpc.o 00:04:41.450 CC lib/bdev/bdev_zone.o 00:04:41.450 CC lib/bdev/scsi_nvme.o 00:04:41.450 CC lib/bdev/part.o 00:04:41.711 SYMLINK libspdk_nvme.so 00:04:45.911 LIB libspdk_blob.a 00:04:45.911 SO libspdk_blob.so.11.0 00:04:45.911 SYMLINK libspdk_blob.so 00:04:46.170 CC lib/lvol/lvol.o 00:04:46.170 CC lib/blobfs/blobfs.o 00:04:46.170 CC lib/blobfs/tree.o 00:04:47.108 LIB libspdk_bdev.a 00:04:47.369 SO libspdk_bdev.so.16.0 00:04:47.369 LIB libspdk_blobfs.a 00:04:47.369 SO libspdk_blobfs.so.10.0 00:04:47.369 SYMLINK libspdk_blobfs.so 00:04:47.369 SYMLINK libspdk_bdev.so 00:04:47.635 LIB libspdk_lvol.a 00:04:47.635 SO libspdk_lvol.so.10.0 00:04:47.635 CC lib/ftl/ftl_core.o 00:04:47.635 CC lib/ftl/ftl_init.o 00:04:47.635 CC lib/ftl/ftl_debug.o 00:04:47.635 CC lib/ftl/ftl_io.o 00:04:47.635 CC lib/ftl/ftl_layout.o 00:04:47.635 CC lib/ftl/ftl_sb.o 00:04:47.635 CC lib/ftl/ftl_l2p_flat.o 00:04:47.635 CC lib/ftl/ftl_nv_cache.o 00:04:47.635 CC lib/ftl/ftl_l2p.o 00:04:47.635 SYMLINK libspdk_lvol.so 00:04:47.635 CC lib/ftl/ftl_band_ops.o 00:04:47.635 CC lib/ftl/ftl_band.o 00:04:47.635 CC lib/ftl/ftl_writer.o 00:04:47.635 CC lib/ftl/ftl_rq.o 00:04:47.635 CC lib/ftl/ftl_reloc.o 00:04:47.635 CC lib/ftl/ftl_l2p_cache.o 00:04:47.635 CC lib/ftl/ftl_p2l.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:47.635 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:47.635 CC lib/ftl/utils/ftl_conf.o 00:04:47.635 CC lib/nbd/nbd.o 00:04:47.635 CC lib/ftl/utils/ftl_md.o 00:04:47.635 CC lib/ftl/utils/ftl_mempool.o 00:04:47.636 CC lib/nbd/nbd_rpc.o 00:04:47.636 CC lib/ftl/utils/ftl_bitmap.o 00:04:47.636 CC lib/ftl/utils/ftl_property.o 00:04:47.636 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:47.636 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:47.636 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:47.636 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:47.636 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:47.636 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:47.636 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:47.636 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:47.636 CC lib/scsi/dev.o 00:04:47.636 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:47.636 CC 
lib/ublk/ublk.o 00:04:47.636 CC lib/nvmf/ctrlr.o 00:04:48.208 CC lib/nvmf/ctrlr_discovery.o 00:04:48.208 CC lib/scsi/lun.o 00:04:48.208 CC lib/nvmf/ctrlr_bdev.o 00:04:48.208 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:48.208 CC lib/ublk/ublk_rpc.o 00:04:48.208 CC lib/scsi/port.o 00:04:48.208 CC lib/nvmf/subsystem.o 00:04:48.208 CC lib/nvmf/nvmf.o 00:04:48.208 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:48.208 CC lib/scsi/scsi.o 00:04:48.208 CC lib/scsi/scsi_bdev.o 00:04:48.208 CC lib/nvmf/nvmf_rpc.o 00:04:48.208 CC lib/ftl/base/ftl_base_dev.o 00:04:48.208 CC lib/scsi/scsi_pr.o 00:04:48.208 CC lib/nvmf/transport.o 00:04:48.208 CC lib/scsi/scsi_rpc.o 00:04:48.208 CC lib/scsi/task.o 00:04:48.208 CC lib/nvmf/tcp.o 00:04:48.208 CC lib/ftl/base/ftl_base_bdev.o 00:04:48.208 CC lib/nvmf/stubs.o 00:04:48.208 CC lib/nvmf/mdns_server.o 00:04:48.208 CC lib/ftl/ftl_trace.o 00:04:48.208 CC lib/nvmf/vfio_user.o 00:04:48.208 CC lib/nvmf/rdma.o 00:04:48.208 CC lib/nvmf/auth.o 00:04:48.776 LIB libspdk_nbd.a 00:04:48.776 LIB libspdk_ublk.a 00:04:48.776 SO libspdk_nbd.so.7.0 00:04:48.776 SO libspdk_ublk.so.3.0 00:04:48.776 SYMLINK libspdk_nbd.so 00:04:48.776 SYMLINK libspdk_ublk.so 00:04:48.776 LIB libspdk_scsi.a 00:04:48.776 SO libspdk_scsi.so.9.0 00:04:49.035 SYMLINK libspdk_scsi.so 00:04:49.294 CC lib/vhost/vhost.o 00:04:49.294 CC lib/vhost/vhost_scsi.o 00:04:49.294 CC lib/vhost/vhost_rpc.o 00:04:49.294 CC lib/vhost/rte_vhost_user.o 00:04:49.294 CC lib/vhost/vhost_blk.o 00:04:49.294 CC lib/iscsi/conn.o 00:04:49.294 CC lib/iscsi/iscsi.o 00:04:49.294 CC lib/iscsi/init_grp.o 00:04:49.294 CC lib/iscsi/param.o 00:04:49.294 CC lib/iscsi/md5.o 00:04:49.294 CC lib/iscsi/tgt_node.o 00:04:49.294 CC lib/iscsi/portal_grp.o 00:04:49.294 CC lib/iscsi/iscsi_subsystem.o 00:04:49.294 CC lib/iscsi/iscsi_rpc.o 00:04:49.294 CC lib/iscsi/task.o 00:04:49.294 LIB libspdk_ftl.a 00:04:49.552 SO libspdk_ftl.so.9.0 00:04:49.812 SYMLINK libspdk_ftl.so 00:04:52.357 LIB libspdk_vhost.a 00:04:52.357 SO libspdk_vhost.so.8.0 00:04:52.357 SYMLINK libspdk_vhost.so 00:04:52.357 LIB libspdk_iscsi.a 00:04:52.357 SO libspdk_iscsi.so.8.0 00:04:52.617 SYMLINK libspdk_iscsi.so 00:04:52.877 LIB libspdk_nvmf.a 00:04:53.137 SO libspdk_nvmf.so.19.0 00:04:53.396 SYMLINK libspdk_nvmf.so 00:04:53.965 CC module/env_dpdk/env_dpdk_rpc.o 00:04:53.965 CC module/vfu_device/vfu_virtio_scsi.o 00:04:53.965 CC module/vfu_device/vfu_virtio.o 00:04:53.965 CC module/vfu_device/vfu_virtio_blk.o 00:04:53.965 CC module/vfu_device/vfu_virtio_rpc.o 00:04:53.965 CC module/accel/dsa/accel_dsa.o 00:04:53.965 CC module/accel/dsa/accel_dsa_rpc.o 00:04:53.965 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:53.965 CC module/scheduler/gscheduler/gscheduler.o 00:04:53.965 CC module/accel/ioat/accel_ioat.o 00:04:53.965 CC module/accel/ioat/accel_ioat_rpc.o 00:04:53.965 CC module/accel/iaa/accel_iaa.o 00:04:53.965 CC module/accel/iaa/accel_iaa_rpc.o 00:04:53.965 CC module/accel/error/accel_error.o 00:04:53.965 CC module/accel/error/accel_error_rpc.o 00:04:53.965 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:53.965 CC module/blob/bdev/blob_bdev.o 00:04:53.965 CC module/keyring/linux/keyring.o 00:04:53.965 CC module/keyring/linux/keyring_rpc.o 00:04:53.965 CC module/sock/posix/posix.o 00:04:53.965 CC module/keyring/file/keyring.o 00:04:53.965 CC module/keyring/file/keyring_rpc.o 00:04:53.965 LIB libspdk_env_dpdk_rpc.a 00:04:53.965 SO libspdk_env_dpdk_rpc.so.6.0 00:04:54.247 LIB libspdk_keyring_linux.a 00:04:54.247 SYMLINK libspdk_env_dpdk_rpc.so 00:04:54.247 SO 
libspdk_keyring_linux.so.1.0 00:04:54.247 LIB libspdk_accel_error.a 00:04:54.247 LIB libspdk_scheduler_gscheduler.a 00:04:54.247 LIB libspdk_keyring_file.a 00:04:54.247 LIB libspdk_scheduler_dpdk_governor.a 00:04:54.247 LIB libspdk_scheduler_dynamic.a 00:04:54.247 SO libspdk_scheduler_gscheduler.so.4.0 00:04:54.247 SO libspdk_accel_error.so.2.0 00:04:54.247 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:54.247 SO libspdk_keyring_file.so.1.0 00:04:54.247 SO libspdk_scheduler_dynamic.so.4.0 00:04:54.247 LIB libspdk_accel_ioat.a 00:04:54.247 SYMLINK libspdk_keyring_linux.so 00:04:54.247 LIB libspdk_accel_iaa.a 00:04:54.247 SYMLINK libspdk_scheduler_gscheduler.so 00:04:54.247 SYMLINK libspdk_accel_error.so 00:04:54.247 LIB libspdk_accel_dsa.a 00:04:54.247 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:54.247 SO libspdk_accel_ioat.so.6.0 00:04:54.247 SO libspdk_accel_iaa.so.3.0 00:04:54.247 SO libspdk_accel_dsa.so.5.0 00:04:54.247 SYMLINK libspdk_scheduler_dynamic.so 00:04:54.247 SYMLINK libspdk_keyring_file.so 00:04:54.247 SYMLINK libspdk_accel_iaa.so 00:04:54.247 SYMLINK libspdk_accel_ioat.so 00:04:54.247 SYMLINK libspdk_accel_dsa.so 00:04:54.247 LIB libspdk_blob_bdev.a 00:04:54.535 SO libspdk_blob_bdev.so.11.0 00:04:54.535 SYMLINK libspdk_blob_bdev.so 00:04:54.799 CC module/bdev/error/vbdev_error.o 00:04:54.799 CC module/bdev/error/vbdev_error_rpc.o 00:04:54.799 CC module/bdev/gpt/gpt.o 00:04:54.799 CC module/bdev/nvme/bdev_nvme.o 00:04:54.799 CC module/bdev/gpt/vbdev_gpt.o 00:04:54.799 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:54.799 CC module/bdev/nvme/nvme_rpc.o 00:04:54.799 CC module/bdev/nvme/bdev_mdns_client.o 00:04:54.799 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:54.799 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:54.799 CC module/bdev/nvme/vbdev_opal.o 00:04:54.799 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:54.799 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:54.799 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:54.799 CC module/bdev/passthru/vbdev_passthru.o 00:04:54.799 CC module/bdev/raid/bdev_raid.o 00:04:54.799 CC module/bdev/raid/bdev_raid_rpc.o 00:04:54.799 CC module/bdev/raid/raid0.o 00:04:54.800 CC module/blobfs/bdev/blobfs_bdev.o 00:04:54.800 CC module/bdev/raid/bdev_raid_sb.o 00:04:54.800 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:54.800 CC module/bdev/raid/raid1.o 00:04:54.800 CC module/bdev/null/bdev_null_rpc.o 00:04:54.800 CC module/bdev/raid/concat.o 00:04:54.800 CC module/bdev/null/bdev_null.o 00:04:54.800 CC module/bdev/split/vbdev_split_rpc.o 00:04:54.800 CC module/bdev/split/vbdev_split.o 00:04:54.800 CC module/bdev/malloc/bdev_malloc.o 00:04:54.800 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:54.800 CC module/bdev/delay/vbdev_delay.o 00:04:54.800 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:54.800 CC module/bdev/lvol/vbdev_lvol.o 00:04:54.800 CC module/bdev/aio/bdev_aio.o 00:04:54.800 CC module/bdev/aio/bdev_aio_rpc.o 00:04:54.800 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:54.800 CC module/bdev/iscsi/bdev_iscsi.o 00:04:54.800 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:54.800 CC module/bdev/ftl/bdev_ftl.o 00:04:54.800 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:54.800 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:54.800 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:54.800 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:54.800 LIB libspdk_vfu_device.a 00:04:55.059 LIB libspdk_sock_posix.a 00:04:55.059 SO libspdk_vfu_device.so.3.0 00:04:55.059 SO libspdk_sock_posix.so.6.0 00:04:55.059 LIB libspdk_bdev_error.a 00:04:55.059 SYMLINK 
libspdk_vfu_device.so 00:04:55.059 SO libspdk_bdev_error.so.6.0 00:04:55.059 LIB libspdk_blobfs_bdev.a 00:04:55.059 SYMLINK libspdk_sock_posix.so 00:04:55.319 SO libspdk_blobfs_bdev.so.6.0 00:04:55.319 SYMLINK libspdk_bdev_error.so 00:04:55.319 SYMLINK libspdk_blobfs_bdev.so 00:04:55.319 LIB libspdk_bdev_aio.a 00:04:55.319 LIB libspdk_bdev_split.a 00:04:55.319 LIB libspdk_bdev_passthru.a 00:04:55.319 SO libspdk_bdev_aio.so.6.0 00:04:55.319 SO libspdk_bdev_split.so.6.0 00:04:55.319 SO libspdk_bdev_passthru.so.6.0 00:04:55.319 LIB libspdk_bdev_delay.a 00:04:55.319 LIB libspdk_bdev_iscsi.a 00:04:55.319 SO libspdk_bdev_delay.so.6.0 00:04:55.319 SO libspdk_bdev_iscsi.so.6.0 00:04:55.319 LIB libspdk_bdev_null.a 00:04:55.319 LIB libspdk_bdev_gpt.a 00:04:55.319 LIB libspdk_bdev_malloc.a 00:04:55.579 SYMLINK libspdk_bdev_split.so 00:04:55.579 SYMLINK libspdk_bdev_passthru.so 00:04:55.579 LIB libspdk_bdev_ftl.a 00:04:55.579 SO libspdk_bdev_null.so.6.0 00:04:55.579 SYMLINK libspdk_bdev_aio.so 00:04:55.579 SO libspdk_bdev_malloc.so.6.0 00:04:55.579 SYMLINK libspdk_bdev_delay.so 00:04:55.579 SO libspdk_bdev_gpt.so.6.0 00:04:55.579 SYMLINK libspdk_bdev_iscsi.so 00:04:55.579 LIB libspdk_bdev_lvol.a 00:04:55.579 SO libspdk_bdev_ftl.so.6.0 00:04:55.579 SYMLINK libspdk_bdev_null.so 00:04:55.579 SYMLINK libspdk_bdev_malloc.so 00:04:55.579 SO libspdk_bdev_lvol.so.6.0 00:04:55.579 LIB libspdk_bdev_zone_block.a 00:04:55.579 SYMLINK libspdk_bdev_ftl.so 00:04:55.579 SYMLINK libspdk_bdev_gpt.so 00:04:55.579 SO libspdk_bdev_zone_block.so.6.0 00:04:55.579 SYMLINK libspdk_bdev_lvol.so 00:04:55.579 SYMLINK libspdk_bdev_zone_block.so 00:04:55.840 LIB libspdk_bdev_virtio.a 00:04:56.100 SO libspdk_bdev_virtio.so.6.0 00:04:56.100 SYMLINK libspdk_bdev_virtio.so 00:04:56.100 LIB libspdk_bdev_raid.a 00:04:56.100 SO libspdk_bdev_raid.so.6.0 00:04:56.360 SYMLINK libspdk_bdev_raid.so 00:05:00.562 LIB libspdk_bdev_nvme.a 00:05:00.562 SO libspdk_bdev_nvme.so.7.0 00:05:00.562 SYMLINK libspdk_bdev_nvme.so 00:05:00.822 CC module/event/subsystems/keyring/keyring.o 00:05:00.822 CC module/event/subsystems/iobuf/iobuf.o 00:05:00.822 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:00.822 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:00.822 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:00.822 CC module/event/subsystems/scheduler/scheduler.o 00:05:00.822 CC module/event/subsystems/sock/sock.o 00:05:00.822 CC module/event/subsystems/vmd/vmd.o 00:05:00.822 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:01.082 LIB libspdk_event_vhost_blk.a 00:05:01.082 LIB libspdk_event_scheduler.a 00:05:01.082 LIB libspdk_event_iobuf.a 00:05:01.082 SO libspdk_event_vhost_blk.so.3.0 00:05:01.082 SO libspdk_event_scheduler.so.4.0 00:05:01.082 SO libspdk_event_iobuf.so.3.0 00:05:01.082 SYMLINK libspdk_event_vhost_blk.so 00:05:01.082 SYMLINK libspdk_event_scheduler.so 00:05:01.082 LIB libspdk_event_keyring.a 00:05:01.082 SYMLINK libspdk_event_iobuf.so 00:05:01.082 LIB libspdk_event_vfu_tgt.a 00:05:01.082 LIB libspdk_event_sock.a 00:05:01.343 LIB libspdk_event_vmd.a 00:05:01.343 SO libspdk_event_keyring.so.1.0 00:05:01.343 SO libspdk_event_vfu_tgt.so.3.0 00:05:01.343 SO libspdk_event_sock.so.5.0 00:05:01.343 SO libspdk_event_vmd.so.6.0 00:05:01.343 SYMLINK libspdk_event_keyring.so 00:05:01.343 SYMLINK libspdk_event_sock.so 00:05:01.343 SYMLINK libspdk_event_vfu_tgt.so 00:05:01.343 SYMLINK libspdk_event_vmd.so 00:05:01.343 CC module/event/subsystems/accel/accel.o 00:05:01.603 LIB libspdk_event_accel.a 00:05:01.603 SO libspdk_event_accel.so.6.0 
00:05:01.603 SYMLINK libspdk_event_accel.so 00:05:01.863 CC module/event/subsystems/bdev/bdev.o 00:05:02.123 LIB libspdk_event_bdev.a 00:05:02.123 SO libspdk_event_bdev.so.6.0 00:05:02.383 SYMLINK libspdk_event_bdev.so 00:05:02.643 CC module/event/subsystems/ublk/ublk.o 00:05:02.643 CC module/event/subsystems/scsi/scsi.o 00:05:02.643 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:02.643 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:02.643 CC module/event/subsystems/nbd/nbd.o 00:05:02.643 LIB libspdk_event_nbd.a 00:05:02.643 LIB libspdk_event_scsi.a 00:05:02.643 SO libspdk_event_nbd.so.6.0 00:05:02.903 SO libspdk_event_scsi.so.6.0 00:05:02.903 SYMLINK libspdk_event_nbd.so 00:05:02.903 LIB libspdk_event_ublk.a 00:05:02.903 SYMLINK libspdk_event_scsi.so 00:05:02.903 SO libspdk_event_ublk.so.3.0 00:05:02.903 SYMLINK libspdk_event_ublk.so 00:05:03.163 LIB libspdk_event_nvmf.a 00:05:03.163 SO libspdk_event_nvmf.so.6.0 00:05:03.163 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:03.163 CC module/event/subsystems/iscsi/iscsi.o 00:05:03.163 SYMLINK libspdk_event_nvmf.so 00:05:03.424 LIB libspdk_event_vhost_scsi.a 00:05:03.424 SO libspdk_event_vhost_scsi.so.3.0 00:05:03.683 LIB libspdk_event_iscsi.a 00:05:03.683 SYMLINK libspdk_event_vhost_scsi.so 00:05:03.683 SO libspdk_event_iscsi.so.6.0 00:05:03.683 SYMLINK libspdk_event_iscsi.so 00:05:03.943 SO libspdk.so.6.0 00:05:03.943 SYMLINK libspdk.so 00:05:04.208 TEST_HEADER include/spdk/accel.h 00:05:04.208 TEST_HEADER include/spdk/assert.h 00:05:04.208 TEST_HEADER include/spdk/barrier.h 00:05:04.208 TEST_HEADER include/spdk/accel_module.h 00:05:04.208 TEST_HEADER include/spdk/base64.h 00:05:04.208 CC app/trace_record/trace_record.o 00:05:04.208 TEST_HEADER include/spdk/bdev.h 00:05:04.208 TEST_HEADER include/spdk/bdev_module.h 00:05:04.208 TEST_HEADER include/spdk/bdev_zone.h 00:05:04.208 TEST_HEADER include/spdk/bit_array.h 00:05:04.208 TEST_HEADER include/spdk/bit_pool.h 00:05:04.208 TEST_HEADER include/spdk/blob_bdev.h 00:05:04.208 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:04.208 CC test/rpc_client/rpc_client_test.o 00:05:04.208 TEST_HEADER include/spdk/blobfs.h 00:05:04.208 TEST_HEADER include/spdk/blob.h 00:05:04.208 TEST_HEADER include/spdk/conf.h 00:05:04.208 TEST_HEADER include/spdk/config.h 00:05:04.208 TEST_HEADER include/spdk/crc16.h 00:05:04.208 TEST_HEADER include/spdk/cpuset.h 00:05:04.208 TEST_HEADER include/spdk/crc32.h 00:05:04.208 TEST_HEADER include/spdk/crc64.h 00:05:04.208 TEST_HEADER include/spdk/dma.h 00:05:04.208 TEST_HEADER include/spdk/dif.h 00:05:04.208 TEST_HEADER include/spdk/endian.h 00:05:04.208 TEST_HEADER include/spdk/env_dpdk.h 00:05:04.208 TEST_HEADER include/spdk/env.h 00:05:04.208 CC app/spdk_nvme_perf/perf.o 00:05:04.208 TEST_HEADER include/spdk/event.h 00:05:04.208 TEST_HEADER include/spdk/fd_group.h 00:05:04.208 CXX app/trace/trace.o 00:05:04.208 CC app/spdk_top/spdk_top.o 00:05:04.208 TEST_HEADER include/spdk/fd.h 00:05:04.208 TEST_HEADER include/spdk/file.h 00:05:04.208 TEST_HEADER include/spdk/ftl.h 00:05:04.208 TEST_HEADER include/spdk/gpt_spec.h 00:05:04.208 TEST_HEADER include/spdk/histogram_data.h 00:05:04.208 TEST_HEADER include/spdk/hexlify.h 00:05:04.208 TEST_HEADER include/spdk/idxd.h 00:05:04.208 TEST_HEADER include/spdk/init.h 00:05:04.208 TEST_HEADER include/spdk/idxd_spec.h 00:05:04.208 CC app/spdk_nvme_identify/identify.o 00:05:04.208 TEST_HEADER include/spdk/ioat.h 00:05:04.208 CC app/spdk_lspci/spdk_lspci.o 00:05:04.208 TEST_HEADER include/spdk/iscsi_spec.h 00:05:04.208 
TEST_HEADER include/spdk/json.h 00:05:04.208 TEST_HEADER include/spdk/ioat_spec.h 00:05:04.208 TEST_HEADER include/spdk/keyring.h 00:05:04.208 CC app/spdk_nvme_discover/discovery_aer.o 00:05:04.208 TEST_HEADER include/spdk/jsonrpc.h 00:05:04.208 TEST_HEADER include/spdk/keyring_module.h 00:05:04.208 TEST_HEADER include/spdk/likely.h 00:05:04.208 TEST_HEADER include/spdk/log.h 00:05:04.208 TEST_HEADER include/spdk/lvol.h 00:05:04.208 TEST_HEADER include/spdk/mmio.h 00:05:04.208 TEST_HEADER include/spdk/memory.h 00:05:04.208 TEST_HEADER include/spdk/nbd.h 00:05:04.208 TEST_HEADER include/spdk/net.h 00:05:04.208 TEST_HEADER include/spdk/notify.h 00:05:04.208 TEST_HEADER include/spdk/nvme.h 00:05:04.208 TEST_HEADER include/spdk/nvme_intel.h 00:05:04.208 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:04.208 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:04.208 TEST_HEADER include/spdk/nvme_spec.h 00:05:04.208 TEST_HEADER include/spdk/nvme_zns.h 00:05:04.208 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:04.208 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:04.208 TEST_HEADER include/spdk/nvmf.h 00:05:04.208 TEST_HEADER include/spdk/nvmf_spec.h 00:05:04.208 TEST_HEADER include/spdk/nvmf_transport.h 00:05:04.208 TEST_HEADER include/spdk/opal.h 00:05:04.208 TEST_HEADER include/spdk/opal_spec.h 00:05:04.208 TEST_HEADER include/spdk/pipe.h 00:05:04.208 TEST_HEADER include/spdk/pci_ids.h 00:05:04.208 TEST_HEADER include/spdk/reduce.h 00:05:04.208 TEST_HEADER include/spdk/queue.h 00:05:04.208 TEST_HEADER include/spdk/scheduler.h 00:05:04.208 TEST_HEADER include/spdk/rpc.h 00:05:04.208 TEST_HEADER include/spdk/scsi_spec.h 00:05:04.208 TEST_HEADER include/spdk/scsi.h 00:05:04.208 TEST_HEADER include/spdk/stdinc.h 00:05:04.208 TEST_HEADER include/spdk/sock.h 00:05:04.208 TEST_HEADER include/spdk/string.h 00:05:04.208 TEST_HEADER include/spdk/trace.h 00:05:04.208 TEST_HEADER include/spdk/thread.h 00:05:04.208 TEST_HEADER include/spdk/trace_parser.h 00:05:04.208 TEST_HEADER include/spdk/tree.h 00:05:04.208 TEST_HEADER include/spdk/ublk.h 00:05:04.208 TEST_HEADER include/spdk/util.h 00:05:04.208 TEST_HEADER include/spdk/uuid.h 00:05:04.208 TEST_HEADER include/spdk/version.h 00:05:04.208 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:04.208 TEST_HEADER include/spdk/vhost.h 00:05:04.208 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:04.208 TEST_HEADER include/spdk/vmd.h 00:05:04.208 TEST_HEADER include/spdk/xor.h 00:05:04.208 TEST_HEADER include/spdk/zipf.h 00:05:04.208 CXX test/cpp_headers/accel.o 00:05:04.208 CXX test/cpp_headers/accel_module.o 00:05:04.208 CXX test/cpp_headers/assert.o 00:05:04.208 CXX test/cpp_headers/barrier.o 00:05:04.208 CXX test/cpp_headers/bdev.o 00:05:04.208 CXX test/cpp_headers/base64.o 00:05:04.208 CXX test/cpp_headers/bdev_module.o 00:05:04.208 CXX test/cpp_headers/bdev_zone.o 00:05:04.208 CXX test/cpp_headers/bit_array.o 00:05:04.208 CXX test/cpp_headers/bit_pool.o 00:05:04.208 CXX test/cpp_headers/blob_bdev.o 00:05:04.208 CXX test/cpp_headers/blobfs_bdev.o 00:05:04.208 CXX test/cpp_headers/blobfs.o 00:05:04.208 CXX test/cpp_headers/blob.o 00:05:04.208 CXX test/cpp_headers/conf.o 00:05:04.208 CXX test/cpp_headers/config.o 00:05:04.208 CXX test/cpp_headers/cpuset.o 00:05:04.208 CXX test/cpp_headers/crc16.o 00:05:04.208 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:04.208 CC app/iscsi_tgt/iscsi_tgt.o 00:05:04.208 CXX test/cpp_headers/crc32.o 00:05:04.208 CC app/spdk_dd/spdk_dd.o 00:05:04.208 CC app/nvmf_tgt/nvmf_main.o 00:05:04.208 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 
00:05:04.208 CC test/thread/poller_perf/poller_perf.o 00:05:04.208 CC test/env/vtophys/vtophys.o 00:05:04.208 CC test/env/memory/memory_ut.o 00:05:04.208 CC test/app/histogram_perf/histogram_perf.o 00:05:04.208 CC test/app/jsoncat/jsoncat.o 00:05:04.209 CC test/app/stub/stub.o 00:05:04.209 CC test/env/pci/pci_ut.o 00:05:04.209 CC examples/util/zipf/zipf.o 00:05:04.471 CC app/fio/nvme/fio_plugin.o 00:05:04.472 CC examples/ioat/perf/perf.o 00:05:04.472 CC app/spdk_tgt/spdk_tgt.o 00:05:04.472 CC examples/ioat/verify/verify.o 00:05:04.472 CC test/dma/test_dma/test_dma.o 00:05:04.472 CC test/app/bdev_svc/bdev_svc.o 00:05:04.472 CC app/fio/bdev/fio_plugin.o 00:05:04.472 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:04.472 LINK spdk_lspci 00:05:04.472 CC test/env/mem_callbacks/mem_callbacks.o 00:05:04.744 LINK spdk_nvme_discover 00:05:04.744 LINK rpc_client_test 00:05:04.744 LINK vtophys 00:05:04.744 LINK interrupt_tgt 00:05:04.744 CXX test/cpp_headers/crc64.o 00:05:04.744 LINK zipf 00:05:04.744 CXX test/cpp_headers/dif.o 00:05:04.744 CXX test/cpp_headers/dma.o 00:05:04.744 LINK jsoncat 00:05:04.744 CXX test/cpp_headers/endian.o 00:05:04.744 CXX test/cpp_headers/env_dpdk.o 00:05:04.744 LINK poller_perf 00:05:04.744 LINK histogram_perf 00:05:04.744 CXX test/cpp_headers/env.o 00:05:04.744 CXX test/cpp_headers/event.o 00:05:04.744 CXX test/cpp_headers/fd_group.o 00:05:04.744 CXX test/cpp_headers/fd.o 00:05:04.744 CXX test/cpp_headers/file.o 00:05:04.744 CXX test/cpp_headers/ftl.o 00:05:04.744 LINK env_dpdk_post_init 00:05:04.744 CXX test/cpp_headers/gpt_spec.o 00:05:04.745 CXX test/cpp_headers/hexlify.o 00:05:04.745 CXX test/cpp_headers/histogram_data.o 00:05:04.745 LINK nvmf_tgt 00:05:04.745 LINK stub 00:05:04.745 LINK spdk_trace_record 00:05:04.745 LINK verify 00:05:04.745 CXX test/cpp_headers/idxd.o 00:05:04.745 CXX test/cpp_headers/idxd_spec.o 00:05:04.745 CXX test/cpp_headers/init.o 00:05:04.745 LINK ioat_perf 00:05:04.745 LINK iscsi_tgt 00:05:05.005 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:05.005 LINK bdev_svc 00:05:05.005 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:05.005 CXX test/cpp_headers/ioat.o 00:05:05.005 CXX test/cpp_headers/ioat_spec.o 00:05:05.005 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:05.005 CXX test/cpp_headers/iscsi_spec.o 00:05:05.005 LINK spdk_tgt 00:05:05.005 LINK mem_callbacks 00:05:05.005 CXX test/cpp_headers/json.o 00:05:05.005 CXX test/cpp_headers/jsonrpc.o 00:05:05.005 LINK pci_ut 00:05:05.272 CXX test/cpp_headers/keyring.o 00:05:05.272 CXX test/cpp_headers/keyring_module.o 00:05:05.272 CXX test/cpp_headers/likely.o 00:05:05.272 CXX test/cpp_headers/log.o 00:05:05.272 LINK test_dma 00:05:05.272 CXX test/cpp_headers/lvol.o 00:05:05.272 CXX test/cpp_headers/memory.o 00:05:05.272 CXX test/cpp_headers/mmio.o 00:05:05.272 CXX test/cpp_headers/nbd.o 00:05:05.272 CXX test/cpp_headers/net.o 00:05:05.272 CXX test/cpp_headers/notify.o 00:05:05.272 CXX test/cpp_headers/nvme.o 00:05:05.272 CXX test/cpp_headers/nvme_intel.o 00:05:05.272 LINK spdk_dd 00:05:05.272 CXX test/cpp_headers/nvme_ocssd.o 00:05:05.272 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:05.272 CXX test/cpp_headers/nvme_spec.o 00:05:05.272 CXX test/cpp_headers/nvme_zns.o 00:05:05.272 CXX test/cpp_headers/nvmf_cmd.o 00:05:05.272 LINK spdk_trace 00:05:05.272 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:05.272 CXX test/cpp_headers/nvmf.o 00:05:05.272 LINK nvme_fuzz 00:05:05.272 CXX test/cpp_headers/nvmf_spec.o 00:05:05.538 CXX test/cpp_headers/nvmf_transport.o 00:05:05.538 CXX test/cpp_headers/opal.o 
00:05:05.538 CC examples/sock/hello_world/hello_sock.o 00:05:05.538 LINK spdk_bdev 00:05:05.538 CXX test/cpp_headers/opal_spec.o 00:05:05.538 CXX test/cpp_headers/pci_ids.o 00:05:05.538 LINK spdk_nvme 00:05:05.538 CXX test/cpp_headers/pipe.o 00:05:05.538 CC examples/vmd/led/led.o 00:05:05.538 CXX test/cpp_headers/queue.o 00:05:05.538 CXX test/cpp_headers/reduce.o 00:05:05.538 CC examples/vmd/lsvmd/lsvmd.o 00:05:05.538 CC examples/idxd/perf/perf.o 00:05:05.538 CC examples/thread/thread/thread_ex.o 00:05:05.538 CXX test/cpp_headers/rpc.o 00:05:05.538 CC test/event/reactor_perf/reactor_perf.o 00:05:05.538 CC test/event/reactor/reactor.o 00:05:05.538 CC test/event/event_perf/event_perf.o 00:05:05.538 CXX test/cpp_headers/scheduler.o 00:05:05.538 CXX test/cpp_headers/scsi.o 00:05:05.538 CC test/event/app_repeat/app_repeat.o 00:05:05.538 CXX test/cpp_headers/scsi_spec.o 00:05:05.800 CXX test/cpp_headers/sock.o 00:05:05.800 CXX test/cpp_headers/stdinc.o 00:05:05.800 CXX test/cpp_headers/string.o 00:05:05.800 CXX test/cpp_headers/thread.o 00:05:05.800 CC test/event/scheduler/scheduler.o 00:05:05.800 CXX test/cpp_headers/trace.o 00:05:05.800 CXX test/cpp_headers/trace_parser.o 00:05:05.800 CXX test/cpp_headers/tree.o 00:05:05.800 LINK spdk_nvme_perf 00:05:05.800 CXX test/cpp_headers/ublk.o 00:05:05.800 CXX test/cpp_headers/util.o 00:05:05.800 CXX test/cpp_headers/uuid.o 00:05:05.800 CXX test/cpp_headers/version.o 00:05:05.800 LINK memory_ut 00:05:05.800 CXX test/cpp_headers/vfio_user_pci.o 00:05:05.800 CXX test/cpp_headers/vfio_user_spec.o 00:05:05.800 CXX test/cpp_headers/vhost.o 00:05:05.800 LINK spdk_nvme_identify 00:05:05.800 CXX test/cpp_headers/vmd.o 00:05:05.800 CXX test/cpp_headers/xor.o 00:05:05.800 LINK spdk_top 00:05:05.800 CXX test/cpp_headers/zipf.o 00:05:05.800 LINK reactor_perf 00:05:05.800 LINK reactor 00:05:05.800 LINK lsvmd 00:05:06.065 LINK led 00:05:06.065 LINK hello_sock 00:05:06.065 LINK app_repeat 00:05:06.065 LINK event_perf 00:05:06.065 CC app/vhost/vhost.o 00:05:06.065 CC test/nvme/aer/aer.o 00:05:06.065 CC test/nvme/reset/reset.o 00:05:06.065 CC test/nvme/sgl/sgl.o 00:05:06.065 CC test/blobfs/mkfs/mkfs.o 00:05:06.065 CC test/accel/dif/dif.o 00:05:06.065 LINK vhost_fuzz 00:05:06.065 LINK thread 00:05:06.065 CC test/nvme/e2edp/nvme_dp.o 00:05:06.065 CC test/nvme/overhead/overhead.o 00:05:06.065 CC test/nvme/startup/startup.o 00:05:06.065 CC test/nvme/err_injection/err_injection.o 00:05:06.065 CC test/nvme/reserve/reserve.o 00:05:06.065 CC test/nvme/simple_copy/simple_copy.o 00:05:06.065 CC test/nvme/connect_stress/connect_stress.o 00:05:06.329 LINK idxd_perf 00:05:06.329 CC test/lvol/esnap/esnap.o 00:05:06.329 CC test/nvme/boot_partition/boot_partition.o 00:05:06.329 LINK scheduler 00:05:06.329 CC test/nvme/compliance/nvme_compliance.o 00:05:06.329 CC test/nvme/fused_ordering/fused_ordering.o 00:05:06.329 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:06.329 CC test/nvme/cuse/cuse.o 00:05:06.329 CC test/nvme/fdp/fdp.o 00:05:06.329 LINK vhost 00:05:06.329 LINK mkfs 00:05:06.588 LINK sgl 00:05:06.588 LINK reserve 00:05:06.588 LINK aer 00:05:06.588 LINK boot_partition 00:05:06.588 LINK simple_copy 00:05:06.588 LINK startup 00:05:06.588 LINK nvme_dp 00:05:06.588 LINK fused_ordering 00:05:06.588 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:06.588 LINK reset 00:05:06.588 CC examples/nvme/reconnect/reconnect.o 00:05:06.588 CC examples/nvme/abort/abort.o 00:05:06.588 CC examples/nvme/arbitration/arbitration.o 00:05:06.588 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:06.588 CC 
examples/nvme/hotplug/hotplug.o 00:05:06.588 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:06.588 CC examples/nvme/hello_world/hello_world.o 00:05:06.588 LINK err_injection 00:05:06.588 LINK connect_stress 00:05:06.588 LINK overhead 00:05:06.848 CC examples/accel/perf/accel_perf.o 00:05:06.848 LINK doorbell_aers 00:05:06.848 CC examples/blob/cli/blobcli.o 00:05:06.848 LINK nvme_compliance 00:05:06.848 LINK fdp 00:05:06.848 CC examples/blob/hello_world/hello_blob.o 00:05:06.848 LINK pmr_persistence 00:05:06.848 LINK dif 00:05:06.848 LINK cmb_copy 00:05:06.848 LINK hello_world 00:05:07.111 LINK hotplug 00:05:07.111 LINK arbitration 00:05:07.373 LINK abort 00:05:07.373 LINK blobcli 00:05:07.373 LINK hello_blob 00:05:07.373 LINK reconnect 00:05:07.634 LINK nvme_manage 00:05:07.634 CC test/bdev/bdevio/bdevio.o 00:05:07.634 LINK accel_perf 00:05:08.204 LINK cuse 00:05:08.465 CC examples/bdev/hello_world/hello_bdev.o 00:05:08.465 CC examples/bdev/bdevperf/bdevperf.o 00:05:08.724 LINK iscsi_fuzz 00:05:08.725 LINK bdevio 00:05:08.984 LINK hello_bdev 00:05:10.368 LINK bdevperf 00:05:11.310 CC examples/nvmf/nvmf/nvmf.o 00:05:11.880 LINK nvmf 00:05:17.191 LINK esnap 00:05:18.130 00:05:18.130 real 1m16.262s 00:05:18.130 user 9m21.819s 00:05:18.130 sys 2m8.699s 00:05:18.130 22:45:54 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:05:18.130 22:45:54 make -- common/autotest_common.sh@10 -- $ set +x 00:05:18.130 ************************************ 00:05:18.130 END TEST make 00:05:18.130 ************************************ 00:05:18.130 22:45:54 -- common/autotest_common.sh@1142 -- $ return 0 00:05:18.130 22:45:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:18.130 22:45:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:18.130 22:45:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:18.130 22:45:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.130 22:45:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:18.130 22:45:54 -- pm/common@44 -- $ pid=617806 00:05:18.130 22:45:54 -- pm/common@50 -- $ kill -TERM 617806 00:05:18.130 22:45:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.130 22:45:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:18.130 22:45:54 -- pm/common@44 -- $ pid=617808 00:05:18.130 22:45:54 -- pm/common@50 -- $ kill -TERM 617808 00:05:18.130 22:45:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.130 22:45:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:18.130 22:45:54 -- pm/common@44 -- $ pid=617810 00:05:18.130 22:45:54 -- pm/common@50 -- $ kill -TERM 617810 00:05:18.130 22:45:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.130 22:45:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:18.130 22:45:54 -- pm/common@44 -- $ pid=617836 00:05:18.130 22:45:54 -- pm/common@50 -- $ sudo -E kill -TERM 617836 00:05:18.130 22:45:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.130 22:45:54 -- nvmf/common.sh@7 -- # uname -s 00:05:18.130 22:45:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.130 22:45:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.130 22:45:54 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.130 22:45:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.130 22:45:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.130 22:45:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.130 22:45:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.130 22:45:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.130 22:45:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.130 22:45:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.130 22:45:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:18.130 22:45:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:18.130 22:45:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.130 22:45:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.130 22:45:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.130 22:45:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.130 22:45:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.131 22:45:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.131 22:45:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.131 22:45:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.131 22:45:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.131 22:45:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.131 22:45:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.131 22:45:54 -- paths/export.sh@5 -- # export PATH 00:05:18.131 22:45:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.131 22:45:54 -- nvmf/common.sh@47 -- # : 0 00:05:18.131 22:45:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:18.131 22:45:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:18.131 22:45:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.131 22:45:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.131 22:45:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.131 22:45:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:18.131 22:45:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:18.131 22:45:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:18.131 22:45:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:18.131 22:45:54 -- spdk/autotest.sh@32 
-- # uname -s 00:05:18.131 22:45:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:18.131 22:45:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:18.131 22:45:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:18.131 22:45:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:18.131 22:45:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:18.131 22:45:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:18.131 22:45:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:18.131 22:45:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:18.131 22:45:54 -- spdk/autotest.sh@48 -- # udevadm_pid=704294 00:05:18.131 22:45:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:18.131 22:45:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:18.131 22:45:54 -- pm/common@17 -- # local monitor 00:05:18.131 22:45:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.131 22:45:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.131 22:45:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.131 22:45:54 -- pm/common@21 -- # date +%s 00:05:18.131 22:45:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.131 22:45:54 -- pm/common@21 -- # date +%s 00:05:18.131 22:45:54 -- pm/common@21 -- # date +%s 00:05:18.131 22:45:54 -- pm/common@25 -- # sleep 1 00:05:18.131 22:45:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721681154 00:05:18.131 22:45:54 -- pm/common@21 -- # date +%s 00:05:18.131 22:45:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721681154 00:05:18.131 22:45:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721681154 00:05:18.131 22:45:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721681154 00:05:18.390 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721681154_collect-vmstat.pm.log 00:05:18.390 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721681154_collect-cpu-load.pm.log 00:05:18.390 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721681154_collect-cpu-temp.pm.log 00:05:18.390 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721681154_collect-bmc-pm.bmc.pm.log 00:05:19.328 22:45:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:19.328 22:45:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:19.328 22:45:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.328 22:45:55 -- common/autotest_common.sh@10 -- # set +x 
00:05:19.328 22:45:55 -- spdk/autotest.sh@59 -- # create_test_list 00:05:19.328 22:45:55 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:19.328 22:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:19.328 22:45:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:19.328 22:45:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.328 22:45:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.328 22:45:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:19.328 22:45:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.328 22:45:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:19.328 22:45:55 -- common/autotest_common.sh@1455 -- # uname 00:05:19.328 22:45:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:19.328 22:45:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:19.328 22:45:55 -- common/autotest_common.sh@1475 -- # uname 00:05:19.328 22:45:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:19.328 22:45:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:19.328 22:45:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:19.328 22:45:55 -- spdk/autotest.sh@72 -- # hash lcov 00:05:19.328 22:45:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:19.328 22:45:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:19.328 --rc lcov_branch_coverage=1 00:05:19.328 --rc lcov_function_coverage=1 00:05:19.328 --rc genhtml_branch_coverage=1 00:05:19.328 --rc genhtml_function_coverage=1 00:05:19.328 --rc genhtml_legend=1 00:05:19.328 --rc geninfo_all_blocks=1 00:05:19.328 ' 00:05:19.328 22:45:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:19.328 --rc lcov_branch_coverage=1 00:05:19.328 --rc lcov_function_coverage=1 00:05:19.328 --rc genhtml_branch_coverage=1 00:05:19.328 --rc genhtml_function_coverage=1 00:05:19.328 --rc genhtml_legend=1 00:05:19.328 --rc geninfo_all_blocks=1 00:05:19.328 ' 00:05:19.328 22:45:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:19.328 --rc lcov_branch_coverage=1 00:05:19.328 --rc lcov_function_coverage=1 00:05:19.328 --rc genhtml_branch_coverage=1 00:05:19.328 --rc genhtml_function_coverage=1 00:05:19.328 --rc genhtml_legend=1 00:05:19.328 --rc geninfo_all_blocks=1 00:05:19.328 --no-external' 00:05:19.328 22:45:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:19.328 --rc lcov_branch_coverage=1 00:05:19.328 --rc lcov_function_coverage=1 00:05:19.328 --rc genhtml_branch_coverage=1 00:05:19.328 --rc genhtml_function_coverage=1 00:05:19.328 --rc genhtml_legend=1 00:05:19.328 --rc geninfo_all_blocks=1 00:05:19.328 --no-external' 00:05:19.328 22:45:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:19.587 lcov: LCOV version 1.14 00:05:19.587 22:45:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:51.689 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:51.689 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:51.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:51.690 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:51.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:51.690 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:51.691 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:51.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:51.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:51.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:51.692 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:06:30.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:30.429 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:48.538 22:47:23 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:48.538 22:47:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.538 22:47:23 -- common/autotest_common.sh@10 -- # set +x 00:06:48.538 22:47:23 -- spdk/autotest.sh@91 -- # rm -f 00:06:48.538 22:47:23 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:49.474 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:06:49.733 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:49.733 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:49.733 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:49.733 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:49.733 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:49.733 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:49.733 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:49.733 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:49.733 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:49.733 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:49.733 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:49.733 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:49.993 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:49.993 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:49.993 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:49.993 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:49.993 22:47:26 -- spdk/autotest.sh@96 -- # 
get_zoned_devs
00:06:49.993 22:47:26 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:06:49.993 22:47:26 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:06:49.993 22:47:26 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:06:49.993 22:47:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:06:49.993 22:47:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:06:49.993 22:47:26 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:06:49.993 22:47:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:49.993 22:47:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:06:49.993 22:47:26 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:06:49.993 22:47:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:06:49.993 22:47:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:06:49.993 22:47:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:06:49.993 22:47:26 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:06:49.993 22:47:26 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:50.253 No valid GPT data, bailing
00:06:50.253 22:47:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:50.253 22:47:26 -- scripts/common.sh@391 -- # pt=
00:06:50.253 22:47:26 -- scripts/common.sh@392 -- # return 1
00:06:50.253 22:47:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:50.253 1+0 records in
00:06:50.253 1+0 records out
00:06:50.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043779 s, 240 MB/s
00:06:50.253 22:47:26 -- spdk/autotest.sh@118 -- # sync
00:06:50.253 22:47:26 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:50.253 22:47:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:50.253 22:47:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:53.543 22:47:29 -- spdk/autotest.sh@124 -- # uname -s
00:06:53.543 22:47:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:06:53.543 22:47:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:06:53.543 22:47:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:53.543 22:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:53.543 22:47:29 -- common/autotest_common.sh@10 -- # set +x
00:06:53.543 ************************************
00:06:53.543 START TEST setup.sh
00:06:53.543 ************************************
00:06:53.543 22:47:29 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:06:53.543 * Looking for test storage...
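The pre-cleanup pass traced above boils down to two checks before any test runs: get_zoned_devs skips every NVMe namespace whose queue/zoned attribute is not "none", and block_in_use treats a namespace as free when neither spdk-gpt.py nor blkid finds a partition-table signature, after which the first MiB is zero-filled. A minimal stand-alone sketch of that logic, assuming the same sysfs layout and substituting plain blkid for the repository's spdk-gpt.py helper:

    #!/usr/bin/env bash
    # Sketch only -- mirrors the behaviour visible in the trace, not the project's real helpers.
    shopt -s nullglob
    declare -A zoned_devs=()
    for sysdir in /sys/block/nvme*; do
        dev=${sysdir##*/}
        # A namespace counts as zoned when queue/zoned reports anything other than "none".
        if [[ -e $sysdir/queue/zoned && $(<"$sysdir/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                    # whole namespaces only, no partitions
        [[ -n ${zoned_devs[${dev##*/}]} ]] && continue   # never write to zoned namespaces
        # No partition-table signature => namespace is considered unused; scrub its header,
        # the same 1 MiB zero-fill that shows up as "1+0 records out" in the log.
        if ! blkid -s PTTYPE -o value "$dev" >/dev/null 2>&1; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync

In this run the single namespace /dev/nvme0n1 carried no GPT ("No valid GPT data, bailing"), so it was wiped before the setup.sh suite started.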
00:06:53.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:53.543 22:47:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:53.543 22:47:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:53.543 22:47:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:53.543 22:47:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.543 22:47:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.543 22:47:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:53.543 ************************************ 00:06:53.543 START TEST acl 00:06:53.543 ************************************ 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:53.543 * Looking for test storage... 00:06:53.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:53.543 22:47:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:53.543 22:47:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:53.543 22:47:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:53.543 22:47:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:53.543 22:47:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:53.543 22:47:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:53.543 22:47:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:53.543 22:47:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:53.543 22:47:29 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:56.075 22:47:31 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:56.075 22:47:31 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:56.075 22:47:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.075 22:47:31 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:56.075 22:47:31 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:56.075 22:47:31 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:57.456 Hugepages 00:06:57.456 node hugesize free / total 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 00:06:57.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:57.456 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:57.457 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:57.715 22:47:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:57.715 22:47:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.715 22:47:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.715 22:47:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:57.715 ************************************ 00:06:57.715 START TEST denied 00:06:57.715 ************************************ 00:06:57.715 22:47:33 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:06:57.715 22:47:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:06:57.715 22:47:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output 
config 00:06:57.715 22:47:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:06:57.715 22:47:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:57.715 22:47:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:00.250 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:00.250 22:47:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:03.536 00:07:03.536 real 0m5.608s 00:07:03.536 user 0m1.759s 00:07:03.536 sys 0m2.906s 00:07:03.536 22:47:39 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.536 22:47:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:03.536 ************************************ 00:07:03.536 END TEST denied 00:07:03.536 ************************************ 00:07:03.536 22:47:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:03.536 22:47:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:03.536 22:47:39 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.536 22:47:39 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.536 22:47:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:03.536 ************************************ 00:07:03.536 START TEST allowed 00:07:03.536 ************************************ 00:07:03.536 22:47:39 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:07:03.536 22:47:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:07:03.536 22:47:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:03.536 22:47:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:03.536 22:47:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:03.536 22:47:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:07:06.827 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:07:06.827 22:47:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:07:06.827 22:47:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:06.827 22:47:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:06.827 22:47:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:06.827 22:47:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:08.732 00:07:08.733 real 0m5.305s 
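The denied/allowed pair in this acl test drives scripts/setup.sh through its PCI filter: with the controller listed in PCI_BLOCKED, "setup.sh config" must report "Skipping denied controller"; with the same BDF in PCI_ALLOWED, it must rebind the device (nvme -> vfio-pci). A hedged sketch of the same check run by hand, reusing only the variables, paths and grep patterns visible in the trace:

    #!/usr/bin/env bash
    # Sketch only; assumes the workspace layout shown in the log.
    SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
    BDF=0000:82:00.0

    # "denied": a blocked controller has to be skipped by setup.sh config.
    PCI_BLOCKED=" $BDF" "$SETUP" config | grep "Skipping denied controller at $BDF"
    "$SETUP" reset

    # "allowed": only the allowed controller may be rebound to vfio-pci.
    PCI_ALLOWED="$BDF" "$SETUP" config | grep -E "$BDF .*: nvme -> .*"
    "$SETUP" reset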
00:07:08.733 user 0m1.524s 00:07:08.733 sys 0m2.687s 00:07:08.733 22:47:44 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.733 22:47:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:08.733 ************************************ 00:07:08.733 END TEST allowed 00:07:08.733 ************************************ 00:07:08.733 22:47:44 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:08.733 00:07:08.733 real 0m15.347s 00:07:08.733 user 0m4.931s 00:07:08.733 sys 0m8.507s 00:07:08.733 22:47:44 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.733 22:47:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:08.733 ************************************ 00:07:08.733 END TEST acl 00:07:08.733 ************************************ 00:07:08.733 22:47:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:08.733 22:47:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:08.733 22:47:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.733 22:47:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.733 22:47:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:08.733 ************************************ 00:07:08.733 START TEST hugepages 00:07:08.733 ************************************ 00:07:08.733 22:47:44 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:08.733 * Looking for test storage... 00:07:08.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:08.733 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:08.733 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:08.733 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.995 22:47:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 26242656 kB' 'MemAvailable: 29808740 kB' 'Buffers: 2704 kB' 'Cached: 11244152 kB' 'SwapCached: 0 kB' 'Active: 8204656 kB' 'Inactive: 3492696 kB' 'Active(anon): 
7812440 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453804 kB' 'Mapped: 151968 kB' 'Shmem: 7361944 kB' 'KReclaimable: 185988 kB' 'Slab: 510108 kB' 'SReclaimable: 185988 kB' 'SUnreclaim: 324120 kB' 'KernelStack: 12432 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304776 kB' 'Committed_AS: 8918036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.996 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:07:08.997 
22:47:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:08.997 22:47:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:08.997 22:47:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.998 22:47:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.998 22:47:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 ************************************ 00:07:08.998 START TEST default_setup 00:07:08.998 ************************************ 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:08.998 22:47:45 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:07:08.998 22:47:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:10.934 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:10.934 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:10.934 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:10.934 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:11.194 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:11.194 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:11.194 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:11.194 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:11.194 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:12.145 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.145 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28350092 kB' 'MemAvailable: 31916048 kB' 'Buffers: 2704 kB' 'Cached: 11244240 kB' 'SwapCached: 0 kB' 'Active: 8222524 kB' 'Inactive: 3492696 kB' 'Active(anon): 7830308 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471548 kB' 'Mapped: 151988 kB' 'Shmem: 7362032 kB' 'KReclaimable: 185732 kB' 'Slab: 509592 kB' 'SReclaimable: 185732 kB' 'SUnreclaim: 323860 kB' 'KernelStack: 12544 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8934948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.145 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.146 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28353728 kB' 'MemAvailable: 31919684 kB' 'Buffers: 2704 kB' 'Cached: 11244248 kB' 'SwapCached: 0 kB' 'Active: 8222492 kB' 'Inactive: 3492696 kB' 'Active(anon): 7830276 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471480 kB' 'Mapped: 152000 kB' 'Shmem: 7362040 kB' 'KReclaimable: 185732 kB' 'Slab: 509600 kB' 'SReclaimable: 185732 kB' 'SUnreclaim: 323868 kB' 'KernelStack: 12304 kB' 'PageTables: 6844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8932720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195536 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.147 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.148 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 
22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.436 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@99 -- # surp=0 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28357908 kB' 'MemAvailable: 31923864 kB' 'Buffers: 2704 kB' 'Cached: 11244272 kB' 'SwapCached: 0 kB' 'Active: 8223948 kB' 'Inactive: 3492696 kB' 'Active(anon): 7831732 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473052 kB' 'Mapped: 152352 kB' 'Shmem: 7362064 kB' 'KReclaimable: 185732 kB' 'Slab: 509664 kB' 'SReclaimable: 185732 kB' 'SUnreclaim: 323932 kB' 'KernelStack: 12272 kB' 'PageTables: 7156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8935548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195488 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.437 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:12.438 nr_hugepages=1024 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:12.438 resv_hugepages=0 00:07:12.438 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:12.439 surplus_hugepages=0 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:12.439 anon_hugepages=0 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:12.439 
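The scan above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one it was asked for (HugePages_Rsvd, which comes back as 0); hugepages.sh then records nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and re-checks the count at hugepages.sh@107 before asking for HugePages_Total. A minimal sketch of that lookup and check, condensed into an awk one-liner rather than the script's read loop; meminfo_val is an illustrative helper name, not part of the SPDK scripts:

    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    free=$(meminfo_val HugePages_Free)   # 1024 in the run above
    resv=$(meminfo_val HugePages_Rsvd)   # 0 in the run above
    surp=$(meminfo_val HugePages_Surp)   # 0 in the run above
    requested=1024                       # nr_hugepages echoed by the trace

    # Same condition the trace evaluates at hugepages.sh@107: the free pages
    # must cover the requested count once surplus and reserved are added back.
    (( free == requested + surp + resv )) && echo "hugepage accounting OK"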
22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28353940 kB' 'MemAvailable: 31919896 kB' 'Buffers: 2704 kB' 'Cached: 11244292 kB' 'SwapCached: 0 kB' 'Active: 8227432 kB' 'Inactive: 3492696 kB' 'Active(anon): 7835216 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476560 kB' 'Mapped: 152680 kB' 'Shmem: 7362084 kB' 'KReclaimable: 185732 kB' 'Slab: 509664 kB' 'SReclaimable: 185732 kB' 'SUnreclaim: 323932 kB' 'KernelStack: 12288 kB' 'PageTables: 7272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8938868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195460 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 
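The printf above is the cached /proc/meminfo snapshot that the loop below walks field by field looking for HugePages_Total. Its hugepage figures are internally consistent, which is a quick sanity check worth doing when reading such a snapshot by hand (values copied from the log; 2048 kB is the Hugepagesize it reports):

    # 1024 pages x 2048 kB/page should equal the reported Hugetlb total.
    pages=1024; page_kb=2048
    echo $(( pages * page_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'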
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.439 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 
22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:07:12.440 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 12448136 kB' 'MemUsed: 12124220 kB' 'SwapCached: 0 kB' 'Active: 5923532 kB' 'Inactive: 3257816 kB' 'Active(anon): 5790784 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929000 kB' 'Mapped: 54360 kB' 'AnonPages: 255636 kB' 'Shmem: 5538436 kB' 'KernelStack: 6632 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127148 kB' 'Slab: 300884 kB' 'SReclaimable: 127148 kB' 'SUnreclaim: 173736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.441 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.442 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:12.443 node0=1024 expecting 1024 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:12.443 00:07:12.443 real 0m3.449s 00:07:12.443 user 0m1.116s 00:07:12.443 sys 0m1.489s 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.443 22:47:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:12.443 ************************************ 00:07:12.443 END TEST default_setup 00:07:12.443 ************************************ 00:07:12.443 22:47:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:12.443 22:47:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:12.443 22:47:48 setup.sh.hugepages -- 
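This closes the default_setup test: the node-0 read through /sys/devices/system/node/node0/meminfo returns HugePages_Total 1024 and HugePages_Surp 0, so the script prints "node0=1024 expecting 1024" and the final [[ 1024 == 1024 ]] check passes. A minimal sketch of that per-node comparison, assuming only the standard /sys/devices/system/node layout; the expected counts are the ones this run used, not a general rule:

    declare -A expected=( [0]=1024 [1]=0 )   # what the trace above expects

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
        have=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
        echo "node$node=$have expecting ${expected[$node]:-0}"
    done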
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.443 22:47:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.443 22:47:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:12.443 ************************************ 00:07:12.443 START TEST per_node_1G_alloc 00:07:12.443 ************************************ 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:12.443 22:47:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:14.348 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 
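per_node_1G_alloc asks for 1048576 kB (1 GiB) on each of nodes 0 and 1, which at the 2048 kB page size works out to 512 pages per node, so the test exports NRHUGE=512 and HUGENODE=0,1 and re-runs scripts/setup.sh (whose device output continues below). The kernel interface such a script ultimately drives is the per-node sysfs pool; the following is a hedged sketch of poking that interface directly, with the numbers mirroring this run rather than the actual setup.sh logic:

    # Reserve 512 x 2048 kB pages on each of node 0 and node 1 (needs root).
    for node in 0 1; do
        echo 512 | sudo tee \
            /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    done
    # Read the pools back; the write can silently allocate fewer pages than
    # requested on a fragmented node, so re-reading is the only confirmation.
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages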
00:07:14.348 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:14.348 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:14.348 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:14.348 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:14.348 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:14.348 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:14.348 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:14.348 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:14.348 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:14.348 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:14.348 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:14.348 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:14.348 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:14.348 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:14.348 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:14.348 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:14.348 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 
28355376 kB' 'MemAvailable: 31921332 kB' 'Buffers: 2704 kB' 'Cached: 11244360 kB' 'SwapCached: 0 kB' 'Active: 8222144 kB' 'Inactive: 3492696 kB' 'Active(anon): 7829928 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471172 kB' 'Mapped: 151928 kB' 'Shmem: 7362152 kB' 'KReclaimable: 185732 kB' 'Slab: 509552 kB' 'SReclaimable: 185732 kB' 'SUnreclaim: 323820 kB' 'KernelStack: 12272 kB' 'PageTables: 7232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8934960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.614 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.615 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28356892 kB' 'MemAvailable: 31922848 kB' 'Buffers: 2704 kB' 'Cached: 11244360 kB' 'SwapCached: 0 kB' 'Active: 8223340 kB' 'Inactive: 3492696 kB' 'Active(anon): 7831124 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472352 kB' 'Mapped: 151936 kB' 'Shmem: 7362152 kB' 'KReclaimable: 185732 kB' 'Slab: 509552 kB' 'SReclaimable: 185732 kB' 'SUnreclaim: 323820 kB' 'KernelStack: 12624 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8935228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
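The right-hand sides that appear as "\H\u\g\e\P\a\g\e\s\_\S\u\r\p" are not literal backslashes in the script: bash xtrace backslash-escapes each character of a quoted "==" operand inside [[ ]] to show it is matched as a literal string rather than a glob. A tiny illustration, assuming the comparison is written with a quoted variable as the trace suggests:

    set -x
    get=HugePages_Surp
    var=MemTotal
    [[ $var == "$get" ]] || echo "skipped"   # the xtrace line renders as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]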
00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.616 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.617 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28357200 kB' 'MemAvailable: 31923144 kB' 'Buffers: 2704 kB' 'Cached: 11244384 kB' 'SwapCached: 0 kB' 'Active: 8222212 kB' 'Inactive: 3492696 kB' 'Active(anon): 7829996 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471032 kB' 'Mapped: 151960 kB' 'Shmem: 7362176 kB' 'KReclaimable: 185708 kB' 'Slab: 509524 kB' 'SReclaimable: 185708 kB' 'SUnreclaim: 323816 kB' 'KernelStack: 12544 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8935380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.618 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.620 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:14.621 nr_hugepages=1024 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:14.621 resv_hugepages=0 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:14.621 surplus_hugepages=0 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:14.621 anon_hugepages=0 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28353132 kB' 'MemAvailable: 31919076 kB' 'Buffers: 2704 kB' 'Cached: 11244408 kB' 'SwapCached: 0 kB' 'Active: 8223752 kB' 'Inactive: 3492696 kB' 'Active(anon): 7831536 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472608 kB' 'Mapped: 151960 kB' 'Shmem: 7362200 kB' 'KReclaimable: 185708 kB' 'Slab: 509524 kB' 'SReclaimable: 185708 kB' 'SUnreclaim: 323816 kB' 'KernelStack: 12784 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8933152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.621 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
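The block above is the same lookup loop repeated for every /proc/meminfo field: setup/common.sh reads each line with IFS=': ' and only echoes a value and returns once the field name matches the key it was asked for (here HugePages_Total), skipping every other field with a bare continue. A minimal standalone sketch of that pattern, using an illustrative function name rather than the exact SPDK helper:

get_meminfo_value() {
    # Return the value of one named field from /proc/meminfo,
    # e.g. get_meminfo_value HugePages_Total -> 1024 on this run.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}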
00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.622 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13483648 kB' 'MemUsed: 11088708 kB' 'SwapCached: 0 kB' 'Active: 5922400 kB' 'Inactive: 3257816 kB' 'Active(anon): 5789652 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 
3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929072 kB' 'Mapped: 54360 kB' 'AnonPages: 254444 kB' 'Shmem: 5538508 kB' 'KernelStack: 6664 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127124 kB' 'Slab: 300804 kB' 'SReclaimable: 127124 kB' 'SUnreclaim: 173680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.623 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
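This second pass is the per-node variant of the same lookup: because a node index (0) was passed in, the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every line before matching on HugePages_Surp, as the mem_f and prefix-strip steps in the trace show. A rough sketch of that variant (function and variable names are illustrative):

get_node_meminfo_value() {
    # Look up one field in a NUMA node's meminfo, falling back to the
    # system-wide /proc/meminfo if the per-node file does not exist.
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}    # per-node files prefix each line with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

Used as, for example: get_node_meminfo_value HugePages_Surp 0 -> 0 in this run.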
00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.624 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.884 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 14873060 kB' 'MemUsed: 4581232 kB' 'SwapCached: 0 kB' 'Active: 2299632 kB' 'Inactive: 234880 kB' 'Active(anon): 2040164 kB' 'Inactive(anon): 0 kB' 'Active(file): 259468 kB' 'Inactive(file): 234880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2318064 kB' 'Mapped: 97572 kB' 'AnonPages: 216528 kB' 'Shmem: 1823716 kB' 'KernelStack: 5672 kB' 'PageTables: 3496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 58584 kB' 'Slab: 208720 kB' 'SReclaimable: 58584 kB' 'SUnreclaim: 150136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 
22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.884 22:47:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.884 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
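The node1 read above mirrors the node0 one, and together they feed the "nodeN=512 expecting 512" checks that follow: 1024 reserved 2048 kB pages spread evenly over the 2 NUMA nodes should leave 512 on each node, plus any surplus the kernel reports per node (0 here). A small sketch of that arithmetic against the kernel's usual per-node sysfs counters (the hugepages-2048kB path is assumed from the standard hugetlb layout):

nr_hugepages=1024
no_nodes=2
expected=$(( nr_hugepages / no_nodes ))      # 512 per node
for node in 0 1; do
    got=$(cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node$node=$got expecting $expected"
done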
00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:14.885 node0=512 expecting 512 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:14.885 
22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:07:14.885 node1=512 expecting 512 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:14.885 00:07:14.885 real 0m2.254s 00:07:14.885 user 0m0.984s 00:07:14.885 sys 0m1.256s 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.885 22:47:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 ************************************ 00:07:14.885 END TEST per_node_1G_alloc 00:07:14.885 ************************************ 00:07:14.885 22:47:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:14.885 22:47:51 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:14.885 22:47:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.885 22:47:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.885 22:47:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 ************************************ 00:07:14.885 START TEST even_2G_alloc 00:07:14.885 ************************************ 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:14.885 22:47:51 
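The lines above close per_node_1G_alloc (the real/user/sys figures come from bash's time keyword) and run_test immediately launches the next case, even_2G_alloc, behind the same banner-and-timer wrapper. A rough sketch of that wrapper pattern, reconstructed only from what the log shows and not taken from autotest_common.sh itself:

# Sketch of the run_test pattern visible in this log (not autotest_common.sh
# verbatim): check the argument count, print START/END banners around a named
# test command, and time it.
run_test() {
    [ "$#" -le 1 ] && return 1        # mirrors the "[ 2 -le 1 ]" check in the log
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                         # produces the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test sleep_demo sleep 1           # stand-in command; hugepages.sh passes the test function name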
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:14.885 22:47:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:16.791 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:16.791 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:16.791 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:16.791 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:16.791 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:16.791 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:16.791 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:16.791 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:16.791 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:16.791 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:16.791 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:16.791 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:16.791 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:16.791 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:16.791 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:16.791 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:16.791 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:16.791 22:47:53 
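By this point get_test_nr_hugepages_per_node has filled nodes_test with 512 pages for each of the two NUMA nodes, NRHUGE=1024 and HUGE_EVEN_ALLOC=yes have been set for scripts/setup.sh, and setup.sh has reported that the NVMe device at 0000:82:00.0 and the other listed PCI functions are already bound to vfio-pci, so no rebinding was needed. The arithmetic behind the 512/512 split is straightforward; a standalone sketch with the node count taken from sysfs (illustrative only, not setup/hugepages.sh):

#!/usr/bin/env bash
# Sketch only: a 2097152 kB request becomes 1024 default-size pages, spread
# evenly across the NUMA nodes found in sysfs, matching the 512/512 values
# assigned to nodes_test above.
size_kb=2097152                               # requested total (2 GiB)
hugepage_kb=2048                              # default x86_64 hugepage size
nr_hugepages=$(( size_kb / hugepage_kb ))     # 1024 pages

nodes=(/sys/devices/system/node/node[0-9]*)   # e.g. node0 node1
no_nodes=${#nodes[@]}

declare -a nodes_test
for (( i = 0; i < no_nodes; i++ )); do
    nodes_test[i]=$(( nr_hugepages / no_nodes ))
done

for i in "${!nodes_test[@]}"; do
    echo "node${i}=${nodes_test[i]} expecting ${nodes_test[i]}"
done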
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28358540 kB' 'MemAvailable: 31924472 kB' 'Buffers: 2704 kB' 'Cached: 11244496 kB' 'SwapCached: 0 kB' 'Active: 8219572 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827356 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468276 kB' 'Mapped: 151008 kB' 'Shmem: 7362288 kB' 'KReclaimable: 185684 kB' 'Slab: 509340 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323656 kB' 'KernelStack: 12320 kB' 'PageTables: 6772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8920572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.791 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:16.791 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.792 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.792 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.792 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:16.792 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.792 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.792 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
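Everything from mem_f=/proc/meminfo onward is a single get_meminfo call: the whole meminfo file is slurped with mapfile, any "Node N " prefix is stripped, and a read loop with IFS=': ' walks every key until the requested one (here AnonHugePages) matches and its value is echoed; every other key just hits continue, which is why the loop dominates the log. The node argument is empty in this call, which is why the trace tests the non-existent /sys/devices/system/node/node/meminfo and falls back to /proc/meminfo. A simplified sketch of that pattern, assuming the same file layouts (not setup/common.sh verbatim; the real script strips the per-node prefix with an extglob expansion instead of sed):

# Sketch of the get_meminfo pattern seen in the trace: return the value of one
# meminfo key, optionally limited to a single NUMA node's meminfo file.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix every line with "Node N "; strip it so both
    # formats parse identically.
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"              # e.g. AnonHugePages -> 0, HugePages_Surp -> 0
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo AnonHugePages        # system-wide value
get_meminfo HugePages_Free 0     # same key, node 0 only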
00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.057 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
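A reading aid for these lines: the \A\n\o\n\H\u\g\e\P\a\g\e\s (and later \H\u\g\e\P\a\g\e\s\_\S\u\r\p) strings are not corruption in the log. They are how bash xtrace renders a quoted right-hand side of == inside [[ ]]: every character is backslash-escaped to show the comparison is a literal string match rather than a glob pattern. A tiny reproduction, assuming nothing beyond stock bash:

# Reproduce the escaped pattern seen in the trace: a quoted RHS of == in
# [[ ]] is printed by xtrace with each character backslash-escaped.
get=HugePages_Surp
var=MemTotal
set -x
[[ $var == "$get" ]]     # trace shows: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x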
00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28357352 kB' 'MemAvailable: 31923284 kB' 'Buffers: 2704 kB' 'Cached: 11244500 kB' 'SwapCached: 0 kB' 'Active: 8219456 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827240 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468108 kB' 'Mapped: 150980 kB' 'Shmem: 7362292 kB' 'KReclaimable: 185684 kB' 'Slab: 509396 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323712 kB' 'KernelStack: 12528 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8921984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.058 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.059 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28357040 kB' 'MemAvailable: 31922972 kB' 'Buffers: 2704 kB' 'Cached: 11244520 kB' 'SwapCached: 0 kB' 'Active: 8218548 kB' 'Inactive: 3492696 kB' 'Active(anon): 7826332 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467556 kB' 'Mapped: 150972 kB' 'Shmem: 7362312 kB' 'KReclaimable: 185684 kB' 'Slab: 509396 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323712 kB' 'KernelStack: 12288 kB' 'PageTables: 6924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8922008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
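At this point the verification has collected anon=0 and surp=0 and is fetching HugePages_Rsvd the same way; the per-node bookkeeping that follows should end with the same "nodeN=... expecting 512" lines seen for the previous test. A rough way to double-check that even split outside the test harness is to read each node's free 2048 kB hugepage count straight from sysfs; this is an assumed standalone check, not verify_nr_hugepages itself:

# Assumed standalone check: compare each NUMA node's free 2048 kB hugepage
# count from sysfs against the expected even split of 512 pages per node.
expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    f=$node_dir/hugepages/hugepages-2048kB/free_hugepages
    [[ -r $f ]] || continue
    node=${node_dir##*/node}
    free=$(cat "$f")
    printf 'node%s=%s expecting %s\n' "$node" "$free" "$expected"
    (( free == expected )) || echo "node$node: unexpected free count" >&2
done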
00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.060 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.061 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:17.062 nr_hugepages=1024 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:17.062 resv_hugepages=0 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:17.062 surplus_hugepages=0 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:17.062 anon_hugepages=0 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28352652 kB' 'MemAvailable: 31918584 kB' 'Buffers: 2704 kB' 'Cached: 11244544 kB' 'SwapCached: 0 kB' 'Active: 8220368 kB' 'Inactive: 3492696 kB' 'Active(anon): 7828152 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469076 kB' 'Mapped: 150972 kB' 'Shmem: 7362336 kB' 'KReclaimable: 185684 kB' 'Slab: 509396 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323712 kB' 'KernelStack: 12768 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8919784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.062 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
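The backslash-heavy right-hand sides in these comparisons (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and so on) are just how bash xtrace prints a quoted "$get" operand of [[ ... == ... ]]: quoting escapes every character so the value is matched literally rather than as a glob pattern. A standalone illustration, not taken from the SPDK tree:

    set -x
    get=HugePages_Total
    [[ HugePages_Free == "$get" ]]   # traced as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    set +x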
00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.063 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
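A few records below, this scan returns 1024 for HugePages_Total (common.sh@33 echoes it), and hugepages.sh@110 then re-checks that the kernel-reported total equals the requested page count plus the surplus and reserved pages read just before. A sketch of that consistency check, plugging in the values visible in this log (nr_hugepages=1024, surp=0, resv=0):

    # Consistency check sketched with the values traced in this log.
    nr_hugepages=1024   # requested page count
    surp=0              # HugePages_Surp read earlier
    resv=0              # HugePages_Rsvd read earlier
    total=1024          # HugePages_Total returned by the scan below

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    fi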
00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.064 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13490200 kB' 'MemUsed: 11082156 kB' 'SwapCached: 0 kB' 'Active: 5918920 kB' 'Inactive: 3257816 kB' 'Active(anon): 5786172 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929136 kB' 'Mapped: 53604 kB' 'AnonPages: 250828 kB' 'Shmem: 5538572 kB' 'KernelStack: 6600 kB' 'PageTables: 3388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 300760 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 173660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.064 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 
22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.065 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.326 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 14863696 kB' 'MemUsed: 4590596 kB' 'SwapCached: 0 kB' 'Active: 2300000 kB' 'Inactive: 234880 kB' 'Active(anon): 2040532 kB' 'Inactive(anon): 0 kB' 'Active(file): 259468 kB' 'Inactive(file): 234880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2318156 kB' 'Mapped: 97340 kB' 'AnonPages: 216788 kB' 'Shmem: 1823808 kB' 'KernelStack: 5672 kB' 'PageTables: 3556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 58584 kB' 'Slab: 208692 kB' 'SReclaimable: 58584 kB' 'SUnreclaim: 150108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.326 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:17.327 node0=512 expecting 512 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:17.327 22:47:53 
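Editor's note: the long run of "-- # continue" records above is setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node1/meminfo line by line, skipping every field that is not HugePages_Surp and finally echoing the matched value (0). A minimal, self-contained sketch of that parsing pattern, written for illustration only and not taken from the SPDK sources:

#!/usr/bin/env bash
# Illustrative sketch (not the SPDK source): fetch one field such as
# HugePages_Surp from /proc/meminfo or from a per-NUMA-node meminfo file,
# mirroring the IFS=': ' read loop traced above.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node <N> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # skip every field we were not asked about
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp       # system-wide value
get_meminfo_sketch HugePages_Surp 1     # node 1, if /sys exposes it

The per-node meminfo files carry a "Node <N> " prefix on every line, which is why the helper strips that prefix before splitting on ': ', exactly as the mem=("${mem[@]#Node +([0-9]) }") record in the trace shows.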
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:07:17.327 node1=512 expecting 512 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:17.327 00:07:17.327 real 0m2.363s 00:07:17.327 user 0m1.016s 00:07:17.327 sys 0m1.330s 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.327 22:47:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:17.327 ************************************ 00:07:17.327 END TEST even_2G_alloc 00:07:17.327 ************************************ 00:07:17.327 22:47:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:17.327 22:47:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:17.328 22:47:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.328 22:47:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.328 22:47:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:17.328 ************************************ 00:07:17.328 START TEST odd_alloc 00:07:17.328 ************************************ 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 0 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:17.328 22:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:19.237 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:19.237 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:19.237 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:19.237 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:19.237 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:19.237 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:19.237 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:19.237 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:19.237 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:19.237 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:19.237 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:19.237 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:19.237 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:19.237 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:19.237 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:19.237 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:19.237 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- 
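Editor's note: before sampling AnonHugePages, verify_nr_hugepages tests the string "always [madvise] never" against the pattern *[never]*; that string is presumably the contents of /sys/kernel/mm/transparent_hugepage/enabled, so the check asks whether transparent hugepages are globally disabled. A small illustrative snippet under that assumption:

#!/usr/bin/env bash
# Illustrative only (assumed sysfs path): reproduce the THP pattern test
# seen in the trace above.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not globally disabled, so AnonHugePages in meminfo may be nonzero.
    echo "transparent_hugepage: $thp"
fi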
setup/common.sh@25 -- # [[ -n '' ]] 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.237 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28317592 kB' 'MemAvailable: 31883524 kB' 'Buffers: 2704 kB' 'Cached: 11244632 kB' 'SwapCached: 0 kB' 'Active: 8219692 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827476 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468192 kB' 'Mapped: 151092 kB' 'Shmem: 7362424 kB' 'KReclaimable: 185684 kB' 'Slab: 509416 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323732 kB' 'KernelStack: 12288 kB' 'PageTables: 7012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 8919856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.238 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 0 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28316896 kB' 'MemAvailable: 31882828 kB' 'Buffers: 2704 kB' 'Cached: 11244636 kB' 'SwapCached: 0 kB' 'Active: 8219168 kB' 'Inactive: 3492696 kB' 'Active(anon): 7826952 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467708 kB' 'Mapped: 150952 kB' 'Shmem: 7362428 kB' 'KReclaimable: 185684 kB' 'Slab: 509376 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323692 kB' 'KernelStack: 12272 kB' 'PageTables: 6888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 8919508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 
22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.239 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.240 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 
22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28316116 kB' 'MemAvailable: 31882048 kB' 'Buffers: 2704 kB' 'Cached: 11244640 kB' 'SwapCached: 0 kB' 'Active: 8219148 kB' 'Inactive: 3492696 kB' 'Active(anon): 
7826932 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467756 kB' 'Mapped: 150976 kB' 'Shmem: 7362432 kB' 'KReclaimable: 185684 kB' 'Slab: 509376 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323692 kB' 'KernelStack: 12256 kB' 'PageTables: 6848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 8921148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.241 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.242 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:19.506 22:47:55 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:19.506 nr_hugepages=1025 00:07:19.506 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:19.506 resv_hugepages=0 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:19.507 surplus_hugepages=0 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:19.507 anon_hugepages=0 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28316100 kB' 'MemAvailable: 31882032 kB' 'Buffers: 2704 kB' 'Cached: 11244644 kB' 'SwapCached: 0 kB' 'Active: 8219832 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827616 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468464 kB' 'Mapped: 150976 kB' 'Shmem: 7362436 kB' 'KReclaimable: 185684 kB' 'Slab: 509368 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323684 kB' 'KernelStack: 12528 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 8922292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.507 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 
22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:19.508 22:47:55 
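For reference, the get_meminfo helper being traced here follows a simple pattern visible in the xtrace: it reads /proc/meminfo (or, when a node number is passed, /sys/devices/system/node/node<N>/meminfo), strips the leading "Node <N> " prefix from each line, and scans key/value pairs with IFS=': ' until it finds the requested field, echoing only the value. The following is a minimal stand-alone sketch reconstructed from this trace output, not the exact setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern exercised by the trace above.
# Reconstructed from the xtrace; the real SPDK setup/common.sh may differ.
shopt -s extglob

get_meminfo() {
    local get=$1          # field name, e.g. HugePages_Total, HugePages_Surp
    local node=$2         # optional NUMA node number
    local var val _
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-local meminfo file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"       # value only, e.g. "1025" or "512"
        return 0
    done
    return 1
}

# Example lookups matching the queries made in this trace:
get_meminfo HugePages_Total
get_meminfo HugePages_Surp 0

In the odd_alloc test itself, get_nodes then records the expected split of the 1025-page allocation across the two NUMA nodes (512 on node 0, 513 on node 1, per the nodes_sys assignments above) before the per-node HugePages counters are read back and compared against that expectation.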
setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.508 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13474272 kB' 'MemUsed: 11098084 kB' 'SwapCached: 0 kB' 'Active: 5919656 kB' 'Inactive: 3257816 kB' 'Active(anon): 5786908 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929212 kB' 'Mapped: 53592 kB' 'AnonPages: 251448 kB' 'Shmem: 5538648 kB' 'KernelStack: 6616 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 300768 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 173668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.509 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 14844924 kB' 'MemUsed: 4609368 kB' 'SwapCached: 0 kB' 
'Active: 2300448 kB' 'Inactive: 234880 kB' 'Active(anon): 2040980 kB' 'Inactive(anon): 0 kB' 'Active(file): 259468 kB' 'Inactive(file): 234880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2318196 kB' 'Mapped: 97384 kB' 'AnonPages: 217244 kB' 'Shmem: 1823848 kB' 'KernelStack: 5992 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 58584 kB' 'Slab: 208632 kB' 'SReclaimable: 58584 kB' 'SUnreclaim: 150048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.510 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:07:19.511 node0=512 expecting 513 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:07:19.511 node1=513 expecting 512 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:07:19.511 00:07:19.511 real 0m2.214s 00:07:19.511 user 0m0.897s 00:07:19.511 sys 0m1.305s 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.511 22:47:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:19.511 ************************************ 00:07:19.511 END TEST odd_alloc 00:07:19.511 ************************************ 00:07:19.511 22:47:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:19.512 22:47:55 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:19.512 22:47:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.512 22:47:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.512 22:47:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:19.512 ************************************ 00:07:19.512 START TEST custom_alloc 00:07:19.512 ************************************ 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 
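The odd_alloc section above closes with the per-node comparison (node0=512 expecting 513, node1=513 expecting 512); each count comes from get_meminfo in setup/common.sh, which switches mem_f to /sys/devices/system/node/node<N>/meminfo when a node argument is given and then scans the fields with IFS=': ' until the requested key is reached. The backslash-escaped right-hand sides in the trace (\H\u\g\e\P\a\g\e\s\_\S\u\r\p, \5\1\2\ \5\1\3) are simply how xtrace prints a quoted, literal [[ ... == ... ]] comparison, not corruption in the log. A minimal standalone sketch of that lookup pattern, assuming the per-node sysfs layout shown in the trace (the helper name below is illustrative, not an SPDK function):

#!/usr/bin/env bash
# Sketch: fetch one field from a node's meminfo, falling back to /proc/meminfo
# when no node is given -- mirrors the get_meminfo flow traced above.
get_node_meminfo() {                      # illustrative name, not SPDK's helper
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}        # per-node files prefix every line with "Node <N>"
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then     # xtrace renders this as == \H\u\g\e\P\a\g\e\s...
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example, matching the printf output captured above:
#   get_node_meminfo HugePages_Total 0   # 512 on this box
#   get_node_meminfo HugePages_Total 1   # 513 on this box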
00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:19.512 22:47:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:21.421 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:21.421 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:21.421 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:21.421 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:21.421 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:21.421 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:21.421 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:21.421 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:21.421 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:21.421 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:21.421 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:21.421 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:21.421 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:21.421 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:21.421 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:21.421 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:21.421 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:21.685 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:21.686 22:47:57 
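The custom_alloc trace above sizes one pool per NUMA node (nodes_hp[0]=512, nodes_hp[1]=1024), joins them into the HUGENODE spec passed to scripts/setup.sh, and sums them into nr_hugepages=1536 before verifying the result. A short self-contained sketch of that bookkeeping, reusing the variable names from the trace (the standalone framing and the final echo are illustrative only):

#!/usr/bin/env bash
# Sketch: build a per-node hugepage spec the way hugepages.sh@181-183 does above.
declare -a nodes_hp=([0]=512 [1]=1024)   # pages wanted on node0 / node1, per the trace
declare -a HUGENODE=()
_nr_hugepages=0

for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done

# hugepages.sh sets "local IFS=," so the array expands comma-joined; same effect here:
spec=$(IFS=,; printf '%s' "${HUGENODE[*]}")
echo "HUGENODE=$spec nr_hugepages=$_nr_hugepages"
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536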
setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 27277808 kB' 'MemAvailable: 30843740 kB' 'Buffers: 2704 kB' 'Cached: 11244772 kB' 'SwapCached: 0 kB' 'Active: 8219560 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827344 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467904 kB' 'Mapped: 151020 kB' 'Shmem: 7362564 kB' 'KReclaimable: 185684 kB' 'Slab: 509564 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323880 kB' 'KernelStack: 12352 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 8920252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.686 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.687 22:47:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32: per-field scan of the /proc/meminfo snapshot continues with no match (Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -> continue) ...]
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:21.687 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 27277980 kB' 'MemAvailable: 30843912 kB' 'Buffers: 2704 kB' 'Cached: 11244772 kB' 'SwapCached: 0 kB' 'Active: 8219240 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827024 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467628 kB' 'Mapped: 150956 kB' 'Shmem: 7362564 kB' 'KReclaimable: 185684 kB' 'Slab: 509588 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323904 kB' 'KernelStack: 12304 kB' 'PageTables: 6888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 8920268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: per-field scan of the snapshot from MemTotal through HugePages_Rsvd, no match -> continue ...]
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
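The block above is one complete get_meminfo lookup from setup/common.sh: the helper snapshots /proc/meminfo into an array, walks it field by field, and echoes the value of the first field whose name matches the requested key (here HugePages_Surp, which the caller stores as surp=0). As a minimal sketch of that lookup loop only, not the actual setup/common.sh source; the function name below and the omission of the per-node handling are assumptions made for illustration:

    # hypothetical sketch: look up one field of /proc/meminfo, mirroring the traced loop
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 0 for HugePages_Surp, 2048 for Hugepagesize (kB where applicable)
                return 0
            fi
        done < /proc/meminfo
        return 1              # requested field not present
    }

    get_meminfo_sketch HugePages_Surp   # would print 0 on this build host, matching surp=0 above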
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:21.689 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 27278892 kB' 'MemAvailable: 30844824 kB' 'Buffers: 2704 kB' 'Cached: 11244792 kB' 'SwapCached: 0 kB' 'Active: 8219724 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827508 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468144 kB' 'Mapped: 150956 kB' 'Shmem: 7362584 kB' 'KReclaimable: 185684 kB' 'Slab: 509588 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323904 kB' 'KernelStack: 12304 kB' 'PageTables: 6908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 8919920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195520 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: per-field scan of the snapshot from MemTotal through CmaFree, no match -> continue ...]
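Two details of the call set-up traced above (setup/common.sh@22-29) are worth spelling out: the helper falls back from /proc/meminfo to a per-NUMA-node meminfo file when a node argument is supplied and the sysfs file exists, and it then strips the 'Node <N> ' prefix that per-node meminfo lines carry, so the same parsing loop works for both sources. A self-contained illustration of those two steps under that reading (the node number and the sample lines are invented for the example; this is not the setup/common.sh source):

    # hypothetical sketch of the meminfo source selection seen in the trace
    node=0
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-NUMA-node view of meminfo
    fi
    mapfile -t mem < "$mem_f"

    # per-node meminfo lines are prefixed with 'Node <N> '; the extglob pattern drops it,
    # demonstrated here on invented sample lines so the result is predictable
    shopt -s extglob
    sample=('Node 0 MemTotal: 32768 kB' 'Node 0 HugePages_Total: 1536')
    sample=("${sample[@]#Node +([0-9]) }")
    printf '%s\n' "${sample[@]}"   # MemTotal: 32768 kB / HugePages_Total: 1536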
[... setup/common.sh@31-32: scan continues over Unaccepted, HugePages_Total and HugePages_Free, no match -> continue ...]
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:07:21.691 nr_hugepages=1536
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:21.691 resv_hugepages=0
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:21.691 surplus_hugepages=0
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:21.691 anon_hugepages=0
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
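At setup/hugepages.sh@102-109 the test prints the values it has just derived (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then appears to assert, with plain shell arithmetic, that the expected page count matches allocated plus surplus plus reserved pages before re-reading HugePages_Total. A minimal stand-alone sketch of that bookkeeping using the numbers from this run (only the arithmetic is reproduced, not the surrounding test logic):

    nr_hugepages=1536   # pages requested by the custom_alloc test case
    surp=0              # HugePages_Surp as read from /proc/meminfo above
    resv=0              # HugePages_Rsvd as read from /proc/meminfo above

    if (( 1536 == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    fi

    # cross-check against the snapshots themselves: 1536 pages x 2048 kB per page
    echo $(( 1536 * 2048 ))   # 3145728, matching the 'Hugetlb: 3145728 kB' lines above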
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:21.691 22:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 27278752 kB' 'MemAvailable: 30844684 kB' 'Buffers: 2704 kB' 'Cached: 11244816 kB' 'SwapCached: 0 kB' 'Active: 8219428 kB' 'Inactive: 3492696 kB' 'Active(anon): 7827212 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467828 kB' 'Mapped: 150984 kB' 'Shmem: 7362608 kB' 'KReclaimable: 185684 kB' 'Slab: 509588 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323904 kB' 'KernelStack: 12320 kB' 'PageTables: 6612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 8922328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: per-field scan of the snapshot from MemTotal through Unaccepted, no match -> continue; the wall clock ticks from 22:47:57 to 22:47:58 during the scan ...]
00:07:21.955 22:47:58
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13474532 kB' 'MemUsed: 11097824 kB' 'SwapCached: 0 kB' 'Active: 5920380 kB' 'Inactive: 3257816 kB' 'Active(anon): 5787632 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929280 kB' 'Mapped: 53592 kB' 'AnonPages: 252060 kB' 'Shmem: 5538716 kB' 'KernelStack: 6504 kB' 'PageTables: 3020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 300876 
kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 173776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.955 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.956 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 13804632 kB' 'MemUsed: 5649660 kB' 'SwapCached: 0 kB' 'Active: 2299552 kB' 'Inactive: 234880 kB' 'Active(anon): 2040084 kB' 'Inactive(anon): 0 kB' 'Active(file): 259468 kB' 'Inactive(file): 234880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2318256 kB' 'Mapped: 97392 kB' 'AnonPages: 216248 kB' 'Shmem: 1823908 kB' 'KernelStack: 5992 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 58584 kB' 'Slab: 208712 kB' 'SReclaimable: 58584 kB' 'SUnreclaim: 150128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
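A quick arithmetic cross-check on the two per-node snapshots printed above: MemUsed is simply MemTotal minus MemFree, so node0 gives 24572356 - 13474532 = 11097824 kB and node1 gives 19454292 - 13804632 = 5649660 kB, which matches the MemUsed values in both printf lines and confirms the snapshots were read intact.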
00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.957 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:21.958 node0=512 expecting 512 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:07:21.958 node1=1024 expecting 1024 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:07:21.958 00:07:21.958 real 0m2.324s 00:07:21.958 user 0m0.975s 00:07:21.958 sys 0m1.333s 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.958 22:47:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:21.958 ************************************ 00:07:21.958 END TEST custom_alloc 00:07:21.958 ************************************ 00:07:21.958 22:47:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:21.958 22:47:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:21.958 22:47:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.958 22:47:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.958 22:47:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:21.958 ************************************ 00:07:21.958 START TEST no_shrink_alloc 00:07:21.958 ************************************ 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 
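The custom_alloc trace that just finished boils down to one operation repeated per field and per node: open /sys/devices/system/node/nodeN/meminfo, scan for a field such as HugePages_Total or HugePages_Surp, and return its value so the per-node totals (512 on node0, 1024 on node1 in this run) can be verified. A minimal stand-alone sketch of that lookup, assuming the standard per-node meminfo layout; the get_node_field name and the awk shortcut are illustrative only, not the read loop used by setup/common.sh:

    #!/usr/bin/env bash
    # Print a single field from a NUMA node's meminfo, e.g. HugePages_Total.
    # Per-node lines look like: "Node 0 HugePages_Total:   512"
    get_node_field() {
        local field=$1 node=$2
        awk -v f="${field}:" '$3 == f {print $4}' "/sys/devices/system/node/node${node}/meminfo"
    }

    # Mirror the 512/1024 split this run expects across the two nodes.
    for node in 0 1; do
        echo "node${node}: HugePages_Total=$(get_node_field HugePages_Total "$node")" \
             "HugePages_Surp=$(get_node_field HugePages_Surp "$node")"
    done

Run on a two-node machine this prints one line per node; on a single-node system the node1 read simply fails, which is why the trace above first enumerates /sys/devices/system/node/node+([0-9]) before indexing into it.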
00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:21.958 22:47:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:23.867 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:23.867 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:23.867 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:23.867 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:23.867 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:23.867 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:23.867 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:23.867 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:23.867 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:23.867 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:23.867 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:23.867 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:23.867 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:23.867 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:23.867 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:23.867 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:23.867 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:23.867 22:48:00 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.867 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28315720 kB' 'MemAvailable: 31881652 kB' 'Buffers: 2704 kB' 'Cached: 11244908 kB' 'SwapCached: 0 kB' 'Active: 8227064 kB' 'Inactive: 3492696 kB' 'Active(anon): 7834848 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475068 kB' 'Mapped: 151964 kB' 'Shmem: 7362700 kB' 'KReclaimable: 185684 kB' 'Slab: 509616 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323932 kB' 'KernelStack: 12416 kB' 'PageTables: 7228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8931508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195652 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
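The no_shrink_alloc verification that begins above first checks whether transparent hugepages are globally disabled: the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test is the content of /sys/kernel/mm/transparent_hugepage/enabled, and only when that string does not contain "[never]" does the script go on to read AnonHugePages from /proc/meminfo. A small sketch of that probe, using only the standard kernel interfaces; the overall structure here is illustrative rather than the exact logic of setup/hugepages.sh:

    #!/usr/bin/env bash
    # Report AnonHugePages only when transparent hugepages are not disabled,
    # matching the "[never]" test visible in the trace above.
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r "$thp" && $(<"$thp") != *"[never]"* ]]; then
        awk '$1 == "AnonHugePages:" {print $1, $2, $3}' /proc/meminfo
    else
        echo "THP disabled; skipping AnonHugePages check"
    fi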
00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 
22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:23.868 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 
22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.132 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28314136 kB' 'MemAvailable: 31880068 kB' 'Buffers: 2704 kB' 'Cached: 11244912 kB' 'SwapCached: 0 kB' 'Active: 8223272 kB' 'Inactive: 3492696 kB' 'Active(anon): 7831056 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471796 kB' 'Mapped: 151484 kB' 'Shmem: 7362704 kB' 'KReclaimable: 185684 kB' 'Slab: 509616 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323932 kB' 'KernelStack: 12464 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8928348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 
22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 
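[editor note] The repeated "IFS=': '" / "read -r var val _" / "[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries above are common.sh's get_meminfo helper walking every key of the meminfo snapshot until it reaches the one requested (AnonHugePages first, then HugePages_Surp). A minimal sketch of that loop, reconstructed from the traced lines around setup/common.sh@17-33; the helper name get_meminfo_sketch, the for-loop shape and the here-string are assumptions for illustration, not the verbatim SPDK script:

    shopt -s extglob                        # needed for the +([0-9]) prefix strip below

    get_meminfo_sketch() {                  # reconstruction of setup/common.sh:get_meminfo
        local get=$1 node=${2:-}            # key to look up, optional NUMA node
        local var val _ mem_f mem line

        mem_f=/proc/meminfo
        # with a node argument the per-node sysfs meminfo is preferred, as in the trace
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every other key, as seen above
            echo "$val"
            return 0
        done
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Total  ->  1024 on this node, per the snapshot above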
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.133 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28308484 kB' 'MemAvailable: 31874416 kB' 'Buffers: 2704 kB' 'Cached: 11244928 kB' 'SwapCached: 0 kB' 'Active: 8226612 kB' 'Inactive: 3492696 kB' 'Active(anon): 7834396 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475080 kB' 'Mapped: 151836 kB' 'Shmem: 7362720 kB' 'KReclaimable: 185684 kB' 'Slab: 509616 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323932 kB' 'KernelStack: 12464 kB' 'PageTables: 7360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8931548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195604 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 
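[editor note] As a quick sanity check on the meminfo snapshots printed above: with a single hugepage size in use, the Hugetlb field should equal HugePages_Total times Hugepagesize, and it does here (1024 * 2048 kB = 2097152 kB). A small stand-alone check along those lines; the awk parsing is illustrative and not part of the test suite:

    # verify Hugetlb == HugePages_Total * Hugepagesize (holds when only one hugepage size is used)
    total=$(awk -v k="HugePages_Total:" '$1 == k {print $2}' /proc/meminfo)
    size_kb=$(awk -v k="Hugepagesize:" '$1 == k {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk -v k="Hugetlb:" '$1 == k {print $2}' /proc/meminfo)
    echo "computed: $((total * size_kb)) kB, reported: ${hugetlb_kb} kB"   # 2097152 kB both ways in this log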
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.134 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:24.135 nr_hugepages=1024 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:24.135 resv_hugepages=0 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:24.135 surplus_hugepages=0 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:24.135 anon_hugepages=0 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
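[editor note] At this point the test has collected anon=0, surp=0 and resv=0, prints the nr_hugepages / resv_hugepages / surplus_hugepages / anon_hugepages summary, and then re-reads HugePages_Total for the arithmetic checks. A hypothetical reconstruction of that accounting step; variable names follow the trace at setup/hugepages.sh@97-110, the meminfo() lookup is illustrative, and the real script's exact expressions may differ:

    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }   # illustrative lookup, not the SPDK helper

    nr_hugepages=1024                      # the allocation this no_shrink_alloc test set up
    anon=$(meminfo AnonHugePages)          # 0 kB in this run
    surp=$(meminfo HugePages_Surp)         # 0
    resv=$(meminfo HugePages_Rsvd)         # 0

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # the pool should hold exactly the requested pages: nothing surplus, nothing reserved,
    # and HugePages_Total must still report the full allocation
    (( $(meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    (( $(meminfo HugePages_Total) == nr_hugepages ))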
00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28308932 kB' 'MemAvailable: 31874864 kB' 'Buffers: 2704 kB' 'Cached: 11244952 kB' 'SwapCached: 0 kB' 'Active: 8222616 kB' 'Inactive: 3492696 kB' 'Active(anon): 7830400 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470948 kB' 'Mapped: 151544 kB' 'Shmem: 7362744 kB' 'KReclaimable: 185684 kB' 'Slab: 509616 kB' 'SReclaimable: 185684 kB' 'SUnreclaim: 323932 kB' 'KernelStack: 12416 kB' 'PageTables: 7176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8927496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.135 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 12431640 kB' 'MemUsed: 12140716 kB' 'SwapCached: 0 kB' 'Active: 5922160 kB' 'Inactive: 3257816 kB' 'Active(anon): 5789412 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929332 kB' 'Mapped: 53592 kB' 'AnonPages: 253800 kB' 'Shmem: 5538768 kB' 'KernelStack: 6664 kB' 'PageTables: 3140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127100 kB' 'Slab: 300944 kB' 'SReclaimable: 127100 kB' 'SUnreclaim: 173844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 
22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.136 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
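The per-node pass this scan belongs to works the same way, just against sysfs: hugepages.sh enumerates the /sys/devices/system/node/node<N> directories (no_nodes=2 on this runner), then reads each node's own meminfo file for HugePages_Total and HugePages_Surp so it can print lines like "node0=1024 expecting 1024" further down. A rough equivalent of that enumeration, assuming the standard sysfs layout (list_node_hugepages is a made-up helper name, not SPDK's):

#!/usr/bin/env bash
shopt -s nullglob
# Sketch: print HugePages_Total for every NUMA node, as read from the per-node meminfo files.
list_node_hugepages() {
    local node total
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node lines look like "Node 0 HugePages_Total:  1024"; the count is the last field.
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
        echo "${node##*/}=${total}"
    done
}

list_node_hugepages   # e.g. node0=1024 and node1=0 on this two-node runner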
00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:24.137 node0=1024 expecting 1024 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:24.137 22:48:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:26.043 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:26.043 0000:82:00.0 (8086 0a54): Already using the 
vfio-pci driver 00:07:26.043 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:26.043 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:26.043 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:26.043 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:26.043 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:26.043 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:26.043 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:26.043 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:26.043 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:26.043 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:26.043 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:26.043 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:26.043 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:26.043 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:26.043 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:26.043 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28258188 kB' 'MemAvailable: 31824116 kB' 'Buffers: 2704 kB' 'Cached: 11245024 kB' 'SwapCached: 0 kB' 'Active: 8227428 kB' 'Inactive: 
3492696 kB' 'Active(anon): 7835212 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475724 kB' 'Mapped: 151984 kB' 'Shmem: 7362816 kB' 'KReclaimable: 185676 kB' 'Slab: 509832 kB' 'SReclaimable: 185676 kB' 'SUnreclaim: 324156 kB' 'KernelStack: 12480 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8932124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195684 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.307 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
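This second verify pass follows the NRHUGE=512 / CLEAR_HUGE=no re-run of setup.sh, which logged "Requested 512 hugepages but 1024 already allocated on node0": with shrinking disabled, an existing larger allocation is left alone, and the verification above still expects to find 1024 pages. One way to express that grow-but-never-shrink rule, assuming the usual per-node 2 MiB sysfs knob (ensure_node_hugepages is a hypothetical helper, not SPDK's setup.sh):

#!/usr/bin/env bash
# Sketch: raise a node's 2 MiB hugepage count to at least $2, never lowering an existing allocation.
ensure_node_hugepages() {
    local node=$1 want=$2 have
    local knob=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    have=$(<"$knob")
    if (( have >= want )); then
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
        return 0
    fi
    echo "$want" > "$knob"   # needs root; the kernel may allocate fewer pages if memory is fragmented
}

ensure_node_hugepages 0 512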
[xtrace: setup/common.sh@31-32 keep scanning /proc/meminfo: KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each compared against AnonHugePages and skipped with "continue"]
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
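That is get_meminfo resolving AnonHugePages to 0: common.sh splits each /proc/meminfo line on ': ', hits "continue" for every key that does not match, then echoes the matching value and returns. A minimal sketch of the same pattern, purely illustrative and not the SPDK common.sh implementation (meminfo_value is a made-up name, assuming a Linux host):

#!/usr/bin/env bash
# Sketch: fetch one /proc/meminfo field the way the traced get_meminfo does,
# i.e. split on ': ', skip non-matching keys, print the value of the match.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done </proc/meminfo
    echo 0
}
meminfo_value AnonHugePages    # 0 on this runner, per the dumps that follow
meminfo_value HugePages_Total  # 1024 on this runner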
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:26.308 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28256808 kB' 'MemAvailable: 31822736 kB' 'Buffers: 2704 kB' 'Cached: 11245024 kB' 'SwapCached: 0 kB' 'Active: 8224124 kB' 'Inactive: 3492696 kB' 'Active(anon): 7831908 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472384 kB' 'Mapped: 151724 kB' 'Shmem: 7362816 kB' 'KReclaimable: 185676 kB' 'Slab: 509820 kB' 'SReclaimable: 185676 kB' 'SUnreclaim: 324144 kB' 'KernelStack: 12448 kB' 'PageTables: 7240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8929248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB'
[xtrace: setup/common.sh@31-32 compare every key of that dump, MemTotal through HugePages_Rsvd, against HugePages_Surp and skip each one with "continue"]
00:07:26.310 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:26.310 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:26.310 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:26.310 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
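The mem=("${mem[@]#Node +([0-9]) }") step in that trace exists for the per-node case: per-NUMA-node meminfo files under /sys/devices/system/node prefix every line with "Node <N> ", which has to be stripped before the key comparison (here no node was given, so the check at common.sh@23 falls through to /proc/meminfo). A tiny self-contained illustration of that extglob strip; the sample lines are fabricated for the demo:

#!/usr/bin/env bash
# Demo of the "Node <N> " prefix strip seen at setup/common.sh@29.
# The array contents below are made up; a real per-node meminfo looks similar.
shopt -s extglob

mem=('Node 0 MemTotal: 44026648 kB' 'Node 0 HugePages_Total: 1024')
mem=("${mem[@]#Node +([0-9]) }")   # drops the leading "Node 0 " from each entry

printf '%s\n' "${mem[@]}"
# MemTotal: 44026648 kB
# HugePages_Total: 1024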
00:07:26.310 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace: setup/common.sh@17-31 set get=HugePages_Rsvd with no node argument, keep mem_f=/proc/meminfo and mapfile the dump below into mem[]]
00:07:26.310 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28261436 kB' 'MemAvailable: 31827364 kB' 'Buffers: 2704 kB' 'Cached: 11245060 kB' 'SwapCached: 0 kB' 'Active: 8220932 kB' 'Inactive: 3492696 kB' 'Active(anon): 7828716 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469140 kB' 'Mapped: 151312 kB' 'Shmem: 7362852 kB' 'KReclaimable: 185676 kB' 'Slab: 509820 kB' 'SReclaimable: 185676 kB' 'SUnreclaim: 324144 kB' 'KernelStack: 12432 kB' 'PageTables: 7212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8923836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB'
[xtrace: setup/common.sh@31-32 compare every key of that dump, MemTotal through HugePages_Free, against HugePages_Rsvd and skip each one with "continue"]
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:26.312 nr_hugepages=1024
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:26.312 resv_hugepages=0
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:26.312 surplus_hugepages=0
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:26.312 anon_hugepages=0
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
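With anon, surp and resv all 0 and 1024 pages configured, hugepages.sh@107-109 only has to confirm that the arithmetic still closes. A loose standalone restatement of that check, hedged: the helper name and the fields read here are illustrative, not the script's own variables:

#!/usr/bin/env bash
# Sketch of the accounting the trace asserts: the expected pool size must be
# explained entirely by the kernel's HugePages_* counters, with no surplus
# or reservation slack left over.
expected=1024

field() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

total=$(field HugePages_Total)
rsvd=$(field HugePages_Rsvd)
surp=$(field HugePages_Surp)

(( expected == total + surp + rsvd )) || echo "hugepage accounting is off"
(( expected == total )) || echo "pool size changed under load"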
00:07:26.312 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace: setup/common.sh@17-31 set get=HugePages_Total with no node argument, keep mem_f=/proc/meminfo and mapfile the dump below into mem[]]
00:07:26.313 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28261128 kB' 'MemAvailable: 31827056 kB' 'Buffers: 2704 kB' 'Cached: 11245064 kB' 'SwapCached: 0 kB' 'Active: 8224296 kB' 'Inactive: 3492696 kB' 'Active(anon): 7832080 kB' 'Inactive(anon): 0 kB' 'Active(file): 392216 kB' 'Inactive(file): 3492696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472540 kB' 'Mapped: 151736 kB' 'Shmem: 7362856 kB' 'KReclaimable: 185676 kB' 'Slab: 509816 kB' 'SReclaimable: 185676 kB' 'SUnreclaim: 324140 kB' 'KernelStack: 12464 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 8929424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1320540 kB' 'DirectMap2M: 10133504 kB' 'DirectMap1G: 40894464 kB'
[xtrace: setup/common.sh@31-32 begin the per-key scan for HugePages_Total; MemTotal through SUnreclaim are each compared and skipped with "continue"]
22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.314 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:26.575 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 12393568 kB' 'MemUsed: 12178788 kB' 'SwapCached: 0 kB' 'Active: 5922080 kB' 'Inactive: 3257816 kB' 'Active(anon): 5789332 kB' 'Inactive(anon): 0 kB' 'Active(file): 132748 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8929336 kB' 'Mapped: 53744 kB' 'AnonPages: 253780 kB' 'Shmem: 5538772 kB' 'KernelStack: 6648 kB' 'PageTables: 3556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127092 kB' 'Slab: 301040 kB' 'SReclaimable: 127092 kB' 'SUnreclaim: 173948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.576 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
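The xtrace above is the per-key scan that the setup/common.sh get_meminfo helper performs: it opens /proc/meminfo, or the per-node copy under /sys/devices/system/node/node<N>/meminfo when a node is given, splits each line on ': ', skips every key that is not the one requested, and echoes the value of the first match (1024 for HugePages_Total earlier, 0 for HugePages_Surp here). A minimal standalone sketch of that lookup, reconstructed from the trace rather than copied from the repository, could look like this:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup the trace walks through (helper name assumed).
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node statistics live in a separate file with the same layout.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            # Node files prefix every entry with "Node <n> "; strip it so both
            # layouts parse identically.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # kB for sizes, a plain count for HugePages_* keys
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Example: get_meminfo HugePages_Surp 0   -> prints 0, as echoed in the trace above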
00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:26.577 node0=1024 expecting 1024 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:26.577 00:07:26.577 real 0m4.486s 00:07:26.577 user 0m1.915s 00:07:26.577 sys 0m2.543s 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.577 22:48:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:26.577 ************************************ 00:07:26.577 END TEST no_shrink_alloc 00:07:26.577 ************************************ 00:07:26.577 22:48:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:26.577 22:48:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:26.577 00:07:26.577 real 0m17.766s 00:07:26.577 user 0m7.202s 00:07:26.577 sys 0m9.669s 00:07:26.577 22:48:02 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.577 22:48:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:26.577 ************************************ 00:07:26.577 END TEST hugepages 00:07:26.577 ************************************ 00:07:26.577 22:48:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:26.577 22:48:02 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:26.577 22:48:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.577 22:48:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.577 22:48:02 
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:26.577 ************************************ 00:07:26.577 START TEST driver 00:07:26.577 ************************************ 00:07:26.577 22:48:02 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:26.577 * Looking for test storage... 00:07:26.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:26.577 22:48:02 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:26.577 22:48:02 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:26.837 22:48:02 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:30.129 22:48:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:30.129 22:48:06 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.129 22:48:06 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.129 22:48:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:30.129 ************************************ 00:07:30.129 START TEST guess_driver 00:07:30.129 ************************************ 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:30.129 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:30.129 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:30.129 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:30.129 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:30.129 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:30.129 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:30.129 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:30.129 Looking for driver=vfio-pci 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:30.129 22:48:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.048 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:32.307 22:48:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:33.247 22:48:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:36.538 00:07:36.538 real 0m6.494s 00:07:36.539 user 0m1.673s 00:07:36.539 sys 0m2.937s 00:07:36.539 22:48:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 
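The guess_driver trace above settles on vfio-pci by counting the entries under /sys/kernel/iommu_groups (143 on this host), reading the unsafe-noiommu flag, and confirming that `modprobe --show-depends vfio_pci` resolves to real .ko objects. A condensed sketch of that decision follows; the uio_pci_generic fallback branch is an assumption about the script's intent and is not exercised in this excerpt:

    #!/usr/bin/env bash
    # Sketch of the driver pick shown in the trace: prefer vfio-pci when an IOMMU
    # is present and the module resolves; the fallback module name is assumed.
    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        shopt -s nullglob
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob
        # 143 groups were found in the run above, so this branch is taken.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # --show-depends prints the insmod lines without loading anything;
            # a *.ko match means the module is genuinely available.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic
    }

    pick_driver   # prints "vfio-pci" on the host captured in this log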
00:07:36.539 22:48:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:36.539 ************************************ 00:07:36.539 END TEST guess_driver 00:07:36.539 ************************************ 00:07:36.539 22:48:12 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:07:36.539 00:07:36.539 real 0m10.018s 00:07:36.539 user 0m2.495s 00:07:36.539 sys 0m4.603s 00:07:36.539 22:48:12 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.539 22:48:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:36.539 ************************************ 00:07:36.539 END TEST driver 00:07:36.539 ************************************ 00:07:36.799 22:48:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:36.799 22:48:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:36.799 22:48:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.799 22:48:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.799 22:48:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:36.799 ************************************ 00:07:36.799 START TEST devices 00:07:36.799 ************************************ 00:07:36.799 22:48:12 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:36.799 * Looking for test storage... 00:07:36.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:36.799 22:48:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:36.799 22:48:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:36.799 22:48:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:36.799 22:48:12 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:39.337 22:48:15 setup.sh.devices 
-- setup/devices.sh@201 -- # ctrl=nvme0 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:39.337 22:48:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:39.337 No valid GPT data, bailing 00:07:39.337 22:48:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:39.337 22:48:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:39.337 22:48:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:39.337 22:48:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.337 22:48:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:39.597 ************************************ 00:07:39.597 START TEST nvme_mount 00:07:39.597 ************************************ 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:39.597 22:48:15 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:39.597 22:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:40.537 Creating new GPT entries in memory. 00:07:40.537 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:40.537 other utilities. 00:07:40.537 22:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:40.537 22:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:40.537 22:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:40.537 22:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:40.537 22:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:41.476 Creating new GPT entries in memory. 00:07:41.476 The operation has completed successfully. 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 733026 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:41.476 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.736 22:48:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.736 22:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:43.644 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:43.645 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:43.645 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:43.645 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:43.645 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:43.645 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:43.645 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:43.645 22:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:43.905 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:43.905 /dev/nvme0n1: 8 bytes were 
erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:43.905 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:43.905 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:43.905 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:43.906 22:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.820 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:45.821 22:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:45.821 22:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.729 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:47.730 22:48:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
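The cleanup that follows (and is repeated again at the end of the suite) unmounts the test directory if it is still a mountpoint and then wipes filesystem and partition-table signatures, first from the partition and then from the whole disk. A compact sketch, with the mount path shortened for readability:

#!/usr/bin/env bash
# Sketch of cleanup_nvme: unmount, then clear signatures so later tests see a
# blank disk. wipefs reports exactly what it erased, as in the lines below.
mnt=/tmp/nvme_mount                              # illustrative path
disk=/dev/nvme0n1

mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 superblock magic (53 ef)
[[ -b $disk     ]] && wipefs --all "$disk"       # GPT headers + protective MBR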
00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:47.991 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:47.991 00:07:47.991 real 0m8.460s 00:07:47.991 user 0m2.207s 00:07:47.991 sys 0m3.927s 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.991 22:48:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:47.991 ************************************ 00:07:47.991 END TEST nvme_mount 00:07:47.991 ************************************ 00:07:47.991 22:48:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:07:47.991 22:48:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:47.991 22:48:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.991 22:48:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.991 22:48:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:47.991 ************************************ 00:07:47.991 START TEST dm_mount 00:07:47.991 ************************************ 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:47.991 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:47.992 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:47.992 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:47.992 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:47.992 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:47.992 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:47.992 22:48:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:48.935 Creating new GPT entries in memory. 00:07:48.935 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:48.935 other utilities. 
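dm_mount repeats the same zap-and-repartition dance but carves two 1 GiB partitions; the second sgdisk call (--new=2:2099200:4196351) appears just below. Equivalent stand-alone commands, hedged the same way as the earlier sketch:

#!/usr/bin/env bash
set -e
# Sketch: create two adjacent 1 GiB partitions for the device-mapper test.
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199       # nvme0n1p1, sectors 2048-2099199
sgdisk "$disk" --new=2:2099200:4196351    # nvme0n1p2, sectors 2099200-4196351
udevadm settle                            # stand-in for the suite's uevent sync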
00:07:48.935 22:48:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:48.935 22:48:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:48.935 22:48:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:48.935 22:48:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:48.935 22:48:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:50.316 Creating new GPT entries in memory. 00:07:50.316 The operation has completed successfully. 00:07:50.316 22:48:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:50.316 22:48:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:50.316 22:48:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:50.316 22:48:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:50.316 22:48:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:07:51.257 The operation has completed successfully. 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 735580 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:51.257 22:48:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:53.167 22:48:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 
22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:55.074 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:55.335 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:55.335 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:55.335 22:48:31 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:55.335 00:07:55.335 real 0m7.206s 00:07:55.335 user 0m1.443s 00:07:55.335 sys 0m2.654s 00:07:55.335 22:48:31 
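The holders checks above (/sys/class/block/nvme0n1p1/holders/dm-0 and .../nvme0n1p2/holders/dm-0) show that dm-0 sits on top of both partitions. The trace never prints the dmsetup table it used, so the concatenated linear mapping below is only one plausible reconstruction, paired with the teardown that cleanup_dm performs:

#!/usr/bin/env bash
# Sketch: build a linear device-mapper target spanning both test partitions,
# then tear it down the way cleanup_dm does. The table layout is an assumption.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test     # the dm node was formatted ext4 above

# teardown
dmsetup remove --force nvme_dm_test
wipefs --all /dev/nvme0n1p1
wipefs --all /dev/nvme0n1p2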
setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.335 22:48:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:55.335 ************************************ 00:07:55.335 END TEST dm_mount 00:07:55.335 ************************************ 00:07:55.335 22:48:31 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:07:55.335 22:48:31 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:55.336 22:48:31 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:55.336 22:48:31 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:55.336 22:48:31 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:55.336 22:48:31 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:55.336 22:48:31 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:55.336 22:48:31 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:55.605 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:55.605 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:55.605 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:55.605 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:55.605 22:48:31 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:55.605 00:07:55.605 real 0m18.827s 00:07:55.605 user 0m4.754s 00:07:55.605 sys 0m8.462s 00:07:55.605 22:48:31 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.605 22:48:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:55.605 ************************************ 00:07:55.605 END TEST devices 00:07:55.605 ************************************ 00:07:55.605 22:48:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:55.605 00:07:55.605 real 1m2.357s 00:07:55.605 user 0m19.542s 00:07:55.605 sys 0m31.507s 00:07:55.605 22:48:31 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.605 22:48:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:55.605 ************************************ 00:07:55.605 END TEST setup.sh 00:07:55.605 ************************************ 00:07:55.605 22:48:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:55.605 22:48:31 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:57.524 Hugepages 00:07:57.524 node hugesize free / total 00:07:57.524 node0 1048576kB 0 / 0 00:07:57.524 node0 2048kB 2048 / 2048 00:07:57.524 node1 1048576kB 0 / 0 00:07:57.524 node1 2048kB 0 / 0 00:07:57.524 00:07:57.524 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:57.524 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.1 
8086 0e21 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:07:57.524 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:07:57.524 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:07:57.524 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:07:57.524 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:07:57.786 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:07:57.786 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:07:57.786 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:07:57.787 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:07:57.787 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:07:57.787 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:57.787 22:48:33 -- spdk/autotest.sh@130 -- # uname -s 00:07:57.787 22:48:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:57.787 22:48:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:57.787 22:48:33 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:59.691 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:59.691 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:59.691 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:59.691 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:59.691 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:59.691 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:59.691 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:59.950 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:59.950 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:59.950 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:00.890 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:08:00.890 22:48:37 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:01.828 22:48:38 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:01.828 22:48:38 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:01.828 22:48:38 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.828 22:48:38 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:01.828 22:48:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:01.828 22:48:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:01.828 22:48:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.828 22:48:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:01.828 22:48:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:02.088 22:48:38 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:02.088 22:48:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:08:02.088 22:48:38 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:03.994 Waiting for block devices as requested 00:08:03.994 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:08:03.994 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:08:03.994 
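The bdfs array above is filled by running gen_nvme.sh and extracting each controller's traddr with jq. Outside the suite, the same list of NVMe PCI addresses can be read straight from sysfs; a small sketch follows (it assumes the controllers are bound to the kernel nvme driver, which is what setup.sh reset restores here):

#!/usr/bin/env bash
# Sketch: list NVMe controller BDFs (e.g. 0000:82:00.0) via sysfs.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    # each /sys/class/nvme/nvmeX/device symlink resolves to the PCI device dir
    basename "$(readlink -f "$ctrl/device")"
done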
0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:08:03.994 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:08:04.254 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:08:04.254 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:08:04.254 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:08:04.254 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:08:04.514 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:08:04.514 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:08:04.514 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:08:04.773 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:08:04.773 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:08:04.773 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:08:05.032 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:08:05.032 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:08:05.032 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:08:05.291 22:48:41 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:05.291 22:48:41 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:08:05.291 22:48:41 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:08:05.291 22:48:41 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:08:05.291 22:48:41 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:05.291 22:48:41 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:05.291 22:48:41 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:08:05.291 22:48:41 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:05.291 22:48:41 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:05.291 22:48:41 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:08:05.291 22:48:41 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:05.291 22:48:41 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:05.291 22:48:41 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:05.291 22:48:41 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:05.291 22:48:41 -- common/autotest_common.sh@1557 -- # continue 00:08:05.291 22:48:41 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:05.291 22:48:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.291 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:05.291 22:48:41 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:05.291 22:48:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.291 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:05.291 22:48:41 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:07.194 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:07.194 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:07.194 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:07.194 0000:00:04.4 (8086 0e24): 
ioatdma -> vfio-pci 00:08:07.194 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:07.194 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:07.194 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:07.194 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:07.194 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:08.131 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:08:08.391 22:48:44 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:08.391 22:48:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.391 22:48:44 -- common/autotest_common.sh@10 -- # set +x 00:08:08.391 22:48:44 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:08:08.391 22:48:44 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:08:08.391 22:48:44 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:08:08.391 22:48:44 -- common/autotest_common.sh@1577 -- # bdfs=() 00:08:08.391 22:48:44 -- common/autotest_common.sh@1577 -- # local bdfs 00:08:08.391 22:48:44 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:08:08.391 22:48:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:08.391 22:48:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:08.391 22:48:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:08.391 22:48:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:08.391 22:48:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:08.391 22:48:44 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:08.391 22:48:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:08:08.391 22:48:44 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:08.391 22:48:44 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:08:08.391 22:48:44 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:08:08.391 22:48:44 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:08.391 22:48:44 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:08:08.391 22:48:44 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:08:08.391 22:48:44 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:08:08.391 22:48:44 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=741221 00:08:08.391 22:48:44 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:08.391 22:48:44 -- common/autotest_common.sh@1598 -- # waitforlisten 741221 00:08:08.391 22:48:44 -- common/autotest_common.sh@829 -- # '[' -z 741221 ']' 00:08:08.391 22:48:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.391 22:48:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.391 22:48:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
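Before attempting the OPAL revert, the helpers above read the controller's OACS word with nvme id-ctrl and keep bit 3 (0x8, Namespace Management), then confirm unvmcap is zero. A stand-alone sketch of the OACS check; the device path is an assumption and nvme-cli must be installed:

#!/usr/bin/env bash
# Sketch: check whether an NVMe controller advertises Namespace Management
# support (OACS bit 3), mirroring the oacs/oacs_ns_manage steps above.
ctrl=/dev/nvme0                                   # assumed controller node
oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {print $2}')
if (( oacs & 0x8 )); then
    echo "$ctrl supports namespace management"
else
    echo "$ctrl does not support namespace management"
fi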
00:08:08.391 22:48:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.391 22:48:44 -- common/autotest_common.sh@10 -- # set +x 00:08:08.650 [2024-07-22 22:48:44.775687] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:08:08.650 [2024-07-22 22:48:44.775875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741221 ] 00:08:08.650 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.650 [2024-07-22 22:48:44.922543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.910 [2024-07-22 22:48:45.072532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.477 22:48:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.477 22:48:45 -- common/autotest_common.sh@862 -- # return 0 00:08:09.477 22:48:45 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:08:09.477 22:48:45 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:08:09.477 22:48:45 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:08:12.767 nvme0n1 00:08:12.767 22:48:48 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:13.334 [2024-07-22 22:48:49.493937] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:08:13.334 [2024-07-22 22:48:49.494034] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:08:13.334 request: 00:08:13.334 { 00:08:13.334 "nvme_ctrlr_name": "nvme0", 00:08:13.334 "password": "test", 00:08:13.334 "method": "bdev_nvme_opal_revert", 00:08:13.334 "req_id": 1 00:08:13.334 } 00:08:13.334 Got JSON-RPC error response 00:08:13.334 response: 00:08:13.334 { 00:08:13.334 "code": -32603, 00:08:13.334 "message": "Internal error" 00:08:13.334 } 00:08:13.334 22:48:49 -- common/autotest_common.sh@1604 -- # true 00:08:13.334 22:48:49 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:08:13.334 22:48:49 -- common/autotest_common.sh@1608 -- # killprocess 741221 00:08:13.334 22:48:49 -- common/autotest_common.sh@948 -- # '[' -z 741221 ']' 00:08:13.334 22:48:49 -- common/autotest_common.sh@952 -- # kill -0 741221 00:08:13.334 22:48:49 -- common/autotest_common.sh@953 -- # uname 00:08:13.334 22:48:49 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.334 22:48:49 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741221 00:08:13.334 22:48:49 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.334 22:48:49 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.334 22:48:49 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741221' 00:08:13.334 killing process with pid 741221 00:08:13.334 22:48:49 -- common/autotest_common.sh@967 -- # kill 741221 00:08:13.334 22:48:49 -- common/autotest_common.sh@972 -- # wait 741221 00:08:13.594 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:13.594 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:13.594 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:13.594 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:13.594 EAL: Unexpected 
size 0 of DMA remapping cleared instead of 2097152 00:08:15.502 22:48:51 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:08:15.502 22:48:51 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:15.502 22:48:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:15.502 22:48:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:15.502 22:48:51 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:15.502 22:48:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.502 22:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 22:48:51 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:15.502 22:48:51 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:15.502 22:48:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.502 22:48:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.502 22:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 ************************************ 00:08:15.502 START TEST env 00:08:15.502 ************************************ 00:08:15.502 22:48:51 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:15.502 * Looking for test storage... 
00:08:15.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:15.502 22:48:51 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:15.502 22:48:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.502 22:48:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.502 22:48:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 ************************************ 00:08:15.502 START TEST env_memory 00:08:15.502 ************************************ 00:08:15.502 22:48:51 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:15.502 00:08:15.502 00:08:15.502 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.502 http://cunit.sourceforge.net/ 00:08:15.502 00:08:15.502 00:08:15.502 Suite: memory 00:08:15.502 Test: alloc and free memory map ...[2024-07-22 22:48:51.772549] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:15.502 passed 00:08:15.762 Test: mem map translation ...[2024-07-22 22:48:51.823341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:15.762 [2024-07-22 22:48:51.823406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:15.762 [2024-07-22 22:48:51.823524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:15.762 [2024-07-22 22:48:51.823555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:15.762 passed 00:08:15.762 Test: mem map registration ...[2024-07-22 22:48:51.941697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:15.762 [2024-07-22 22:48:51.941753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:15.762 passed 00:08:16.024 Test: mem map adjacent registrations ...passed 00:08:16.024 00:08:16.024 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.024 suites 1 1 n/a 0 0 00:08:16.024 tests 4 4 4 0 0 00:08:16.024 asserts 152 152 152 0 n/a 00:08:16.024 00:08:16.024 Elapsed time = 0.380 seconds 00:08:16.024 00:08:16.024 real 0m0.392s 00:08:16.024 user 0m0.378s 00:08:16.024 sys 0m0.011s 00:08:16.024 22:48:52 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.024 22:48:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:16.024 ************************************ 00:08:16.024 END TEST env_memory 00:08:16.024 ************************************ 00:08:16.024 22:48:52 env -- common/autotest_common.sh@1142 -- # return 0 00:08:16.024 22:48:52 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:16.024 22:48:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:08:16.024 22:48:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.024 22:48:52 env -- common/autotest_common.sh@10 -- # set +x 00:08:16.024 ************************************ 00:08:16.024 START TEST env_vtophys 00:08:16.024 ************************************ 00:08:16.024 22:48:52 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:16.024 EAL: lib.eal log level changed from notice to debug 00:08:16.024 EAL: Detected lcore 0 as core 0 on socket 0 00:08:16.024 EAL: Detected lcore 1 as core 1 on socket 0 00:08:16.024 EAL: Detected lcore 2 as core 2 on socket 0 00:08:16.024 EAL: Detected lcore 3 as core 3 on socket 0 00:08:16.024 EAL: Detected lcore 4 as core 4 on socket 0 00:08:16.024 EAL: Detected lcore 5 as core 5 on socket 0 00:08:16.024 EAL: Detected lcore 6 as core 8 on socket 0 00:08:16.024 EAL: Detected lcore 7 as core 9 on socket 0 00:08:16.024 EAL: Detected lcore 8 as core 10 on socket 0 00:08:16.024 EAL: Detected lcore 9 as core 11 on socket 0 00:08:16.024 EAL: Detected lcore 10 as core 12 on socket 0 00:08:16.024 EAL: Detected lcore 11 as core 13 on socket 0 00:08:16.024 EAL: Detected lcore 12 as core 0 on socket 1 00:08:16.024 EAL: Detected lcore 13 as core 1 on socket 1 00:08:16.024 EAL: Detected lcore 14 as core 2 on socket 1 00:08:16.024 EAL: Detected lcore 15 as core 3 on socket 1 00:08:16.024 EAL: Detected lcore 16 as core 4 on socket 1 00:08:16.024 EAL: Detected lcore 17 as core 5 on socket 1 00:08:16.024 EAL: Detected lcore 18 as core 8 on socket 1 00:08:16.024 EAL: Detected lcore 19 as core 9 on socket 1 00:08:16.024 EAL: Detected lcore 20 as core 10 on socket 1 00:08:16.024 EAL: Detected lcore 21 as core 11 on socket 1 00:08:16.024 EAL: Detected lcore 22 as core 12 on socket 1 00:08:16.024 EAL: Detected lcore 23 as core 13 on socket 1 00:08:16.024 EAL: Detected lcore 24 as core 0 on socket 0 00:08:16.024 EAL: Detected lcore 25 as core 1 on socket 0 00:08:16.024 EAL: Detected lcore 26 as core 2 on socket 0 00:08:16.024 EAL: Detected lcore 27 as core 3 on socket 0 00:08:16.024 EAL: Detected lcore 28 as core 4 on socket 0 00:08:16.024 EAL: Detected lcore 29 as core 5 on socket 0 00:08:16.024 EAL: Detected lcore 30 as core 8 on socket 0 00:08:16.024 EAL: Detected lcore 31 as core 9 on socket 0 00:08:16.024 EAL: Detected lcore 32 as core 10 on socket 0 00:08:16.024 EAL: Detected lcore 33 as core 11 on socket 0 00:08:16.024 EAL: Detected lcore 34 as core 12 on socket 0 00:08:16.024 EAL: Detected lcore 35 as core 13 on socket 0 00:08:16.024 EAL: Detected lcore 36 as core 0 on socket 1 00:08:16.024 EAL: Detected lcore 37 as core 1 on socket 1 00:08:16.024 EAL: Detected lcore 38 as core 2 on socket 1 00:08:16.024 EAL: Detected lcore 39 as core 3 on socket 1 00:08:16.024 EAL: Detected lcore 40 as core 4 on socket 1 00:08:16.024 EAL: Detected lcore 41 as core 5 on socket 1 00:08:16.024 EAL: Detected lcore 42 as core 8 on socket 1 00:08:16.024 EAL: Detected lcore 43 as core 9 on socket 1 00:08:16.024 EAL: Detected lcore 44 as core 10 on socket 1 00:08:16.024 EAL: Detected lcore 45 as core 11 on socket 1 00:08:16.024 EAL: Detected lcore 46 as core 12 on socket 1 00:08:16.024 EAL: Detected lcore 47 as core 13 on socket 1 00:08:16.024 EAL: Maximum logical cores by configuration: 128 00:08:16.024 EAL: Detected CPU lcores: 48 00:08:16.024 EAL: Detected NUMA nodes: 2 00:08:16.024 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:16.024 EAL: Detected shared linkage of DPDK 
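The lcore map above reflects EAL's own CPU detection (48 logical cores on 2 NUMA sockets). If one wanted to cross-check that view against the kernel's, standard host tools are enough; this is illustrative only:

    # compare EAL's detected topology with what the kernel reports
    lscpu | grep -E '^(Socket|Core|Thread|NUMA)'
    numactl --hardware    # NUMA node/CPU layout; requires the numactl package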
00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:08:16.025 EAL: Registered [vdev] bus. 00:08:16.025 EAL: bus.vdev log level changed from disabled to notice 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:08:16.025 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:16.025 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:08:16.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:08:16.025 EAL: No shared files mode enabled, IPC will be disabled 00:08:16.025 EAL: No shared files mode enabled, IPC is disabled 00:08:16.025 EAL: Bus pci wants IOVA as 'DC' 00:08:16.025 EAL: Bus vdev wants IOVA as 'DC' 00:08:16.025 EAL: Buses did not request a specific IOVA mode. 00:08:16.025 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:16.025 EAL: Selected IOVA mode 'VA' 00:08:16.025 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.025 EAL: Probing VFIO support... 00:08:16.025 EAL: IOMMU type 1 (Type 1) is supported 00:08:16.025 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:16.025 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:16.025 EAL: VFIO support initialized 00:08:16.025 EAL: Ask a virtual area of 0x2e000 bytes 00:08:16.025 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:16.025 EAL: Setting up physically contiguous memory... 
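VFIO coming up here depends on the earlier ioatdma/nvme rebinds to vfio-pci and on the host IOMMU. A quick sanity check, using the NVMe BDF 0000:82:00.0 seen in this run, might look like:

    # confirm the device sits on vfio-pci and belongs to an IOMMU group
    basename "$(readlink /sys/bus/pci/devices/0000:82:00.0/driver)"    # expect: vfio-pci
    readlink /sys/bus/pci/devices/0000:82:00.0/iommu_group             # resolves to .../iommu_groups/<N> when the IOMMU is active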
00:08:16.025 EAL: Setting maximum number of open files to 524288 00:08:16.025 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:16.025 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:16.025 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:16.025 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:16.025 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.025 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:16.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.025 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.025 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:16.025 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:16.025 EAL: Hugepages will be freed exactly as allocated. 00:08:16.025 EAL: No shared files mode enabled, IPC is disabled 00:08:16.025 EAL: No shared files mode enabled, IPC is disabled 00:08:16.025 EAL: TSC frequency is ~2700000 KHz 00:08:16.025 EAL: Main lcore 0 is ready (tid=7f557cf9da00;cpuset=[0]) 00:08:16.025 EAL: Trying to obtain current memory policy. 00:08:16.025 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.025 EAL: Restoring previous memory policy: 0 00:08:16.025 EAL: request: mp_malloc_sync 00:08:16.025 EAL: No shared files mode enabled, IPC is disabled 00:08:16.025 EAL: Heap on socket 0 was expanded by 2MB 00:08:16.025 EAL: PCI device 0000:0e:00.0 on NUMA socket 0 00:08:16.025 EAL: probe driver: 8086:1583 net_i40e 00:08:16.025 EAL: Not managed by a supported kernel driver, skipped 00:08:16.025 EAL: PCI device 0000:0e:00.1 on NUMA socket 0 00:08:16.026 EAL: probe driver: 8086:1583 net_i40e 00:08:16.026 EAL: Not managed by a supported kernel driver, skipped 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:16.026 EAL: Mem event callback 'spdk:(nil)' registered 00:08:16.026 00:08:16.026 00:08:16.026 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.026 http://cunit.sourceforge.net/ 00:08:16.026 00:08:16.026 00:08:16.026 Suite: components_suite 00:08:16.026 Test: vtophys_malloc_test ...passed 00:08:16.026 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:16.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.026 EAL: Restoring previous memory policy: 4 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was expanded by 4MB 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was shrunk by 4MB 00:08:16.026 EAL: Trying to obtain current memory policy. 00:08:16.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.026 EAL: Restoring previous memory policy: 4 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was expanded by 6MB 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was shrunk by 6MB 00:08:16.026 EAL: Trying to obtain current memory policy. 00:08:16.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.026 EAL: Restoring previous memory policy: 4 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was expanded by 10MB 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was shrunk by 10MB 00:08:16.026 EAL: Trying to obtain current memory policy. 
00:08:16.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.026 EAL: Restoring previous memory policy: 4 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was expanded by 18MB 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was shrunk by 18MB 00:08:16.026 EAL: Trying to obtain current memory policy. 00:08:16.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.026 EAL: Restoring previous memory policy: 4 00:08:16.026 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.026 EAL: request: mp_malloc_sync 00:08:16.026 EAL: No shared files mode enabled, IPC is disabled 00:08:16.026 EAL: Heap on socket 0 was expanded by 34MB 00:08:16.301 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.301 EAL: request: mp_malloc_sync 00:08:16.301 EAL: No shared files mode enabled, IPC is disabled 00:08:16.301 EAL: Heap on socket 0 was shrunk by 34MB 00:08:16.301 EAL: Trying to obtain current memory policy. 00:08:16.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.301 EAL: Restoring previous memory policy: 4 00:08:16.301 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.301 EAL: request: mp_malloc_sync 00:08:16.301 EAL: No shared files mode enabled, IPC is disabled 00:08:16.301 EAL: Heap on socket 0 was expanded by 66MB 00:08:16.301 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.301 EAL: request: mp_malloc_sync 00:08:16.301 EAL: No shared files mode enabled, IPC is disabled 00:08:16.301 EAL: Heap on socket 0 was shrunk by 66MB 00:08:16.301 EAL: Trying to obtain current memory policy. 00:08:16.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.301 EAL: Restoring previous memory policy: 4 00:08:16.301 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.301 EAL: request: mp_malloc_sync 00:08:16.301 EAL: No shared files mode enabled, IPC is disabled 00:08:16.301 EAL: Heap on socket 0 was expanded by 130MB 00:08:16.301 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.301 EAL: request: mp_malloc_sync 00:08:16.301 EAL: No shared files mode enabled, IPC is disabled 00:08:16.301 EAL: Heap on socket 0 was shrunk by 130MB 00:08:16.301 EAL: Trying to obtain current memory policy. 00:08:16.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.588 EAL: Restoring previous memory policy: 4 00:08:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.588 EAL: request: mp_malloc_sync 00:08:16.588 EAL: No shared files mode enabled, IPC is disabled 00:08:16.588 EAL: Heap on socket 0 was expanded by 258MB 00:08:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.588 EAL: request: mp_malloc_sync 00:08:16.588 EAL: No shared files mode enabled, IPC is disabled 00:08:16.588 EAL: Heap on socket 0 was shrunk by 258MB 00:08:16.588 EAL: Trying to obtain current memory policy. 
00:08:16.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.847 EAL: Restoring previous memory policy: 4 00:08:16.848 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.848 EAL: request: mp_malloc_sync 00:08:16.848 EAL: No shared files mode enabled, IPC is disabled 00:08:16.848 EAL: Heap on socket 0 was expanded by 514MB 00:08:16.848 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.108 EAL: request: mp_malloc_sync 00:08:17.108 EAL: No shared files mode enabled, IPC is disabled 00:08:17.108 EAL: Heap on socket 0 was shrunk by 514MB 00:08:17.108 EAL: Trying to obtain current memory policy. 00:08:17.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.677 EAL: Restoring previous memory policy: 4 00:08:17.677 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.677 EAL: request: mp_malloc_sync 00:08:17.677 EAL: No shared files mode enabled, IPC is disabled 00:08:17.677 EAL: Heap on socket 0 was expanded by 1026MB 00:08:17.937 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.197 EAL: request: mp_malloc_sync 00:08:18.197 EAL: No shared files mode enabled, IPC is disabled 00:08:18.197 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:18.197 passed 00:08:18.197 00:08:18.197 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.197 suites 1 1 n/a 0 0 00:08:18.197 tests 2 2 2 0 0 00:08:18.197 asserts 497 497 497 0 n/a 00:08:18.197 00:08:18.197 Elapsed time = 1.979 seconds 00:08:18.197 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.197 EAL: request: mp_malloc_sync 00:08:18.197 EAL: No shared files mode enabled, IPC is disabled 00:08:18.197 EAL: Heap on socket 0 was shrunk by 2MB 00:08:18.197 EAL: No shared files mode enabled, IPC is disabled 00:08:18.197 EAL: No shared files mode enabled, IPC is disabled 00:08:18.197 EAL: No shared files mode enabled, IPC is disabled 00:08:18.197 00:08:18.197 real 0m2.154s 00:08:18.197 user 0m1.135s 00:08:18.197 sys 0m0.975s 00:08:18.197 22:48:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.197 22:48:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:18.197 ************************************ 00:08:18.197 END TEST env_vtophys 00:08:18.197 ************************************ 00:08:18.197 22:48:54 env -- common/autotest_common.sh@1142 -- # return 0 00:08:18.197 22:48:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:18.197 22:48:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.197 22:48:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.197 22:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:08:18.197 ************************************ 00:08:18.197 START TEST env_pci 00:08:18.197 ************************************ 00:08:18.197 22:48:54 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:18.197 00:08:18.197 00:08:18.197 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.197 http://cunit.sourceforge.net/ 00:08:18.197 00:08:18.197 00:08:18.197 Suite: pci 00:08:18.197 Test: pci_hook ...[2024-07-22 22:48:54.431560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 742368 has claimed it 00:08:18.197 EAL: Cannot find device (10000:00:01.0) 00:08:18.197 EAL: Failed to attach device on primary process 00:08:18.197 passed 00:08:18.197 
00:08:18.197 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.197 suites 1 1 n/a 0 0 00:08:18.197 tests 1 1 1 0 0 00:08:18.197 asserts 25 25 25 0 n/a 00:08:18.197 00:08:18.197 Elapsed time = 0.048 seconds 00:08:18.197 00:08:18.197 real 0m0.069s 00:08:18.197 user 0m0.016s 00:08:18.197 sys 0m0.052s 00:08:18.197 22:48:54 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.197 22:48:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:18.197 ************************************ 00:08:18.197 END TEST env_pci 00:08:18.197 ************************************ 00:08:18.457 22:48:54 env -- common/autotest_common.sh@1142 -- # return 0 00:08:18.457 22:48:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:18.457 22:48:54 env -- env/env.sh@15 -- # uname 00:08:18.457 22:48:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:18.457 22:48:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:18.457 22:48:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:18.457 22:48:54 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:18.457 22:48:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.457 22:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:08:18.457 ************************************ 00:08:18.457 START TEST env_dpdk_post_init 00:08:18.457 ************************************ 00:08:18.457 22:48:54 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:18.457 EAL: Detected CPU lcores: 48 00:08:18.457 EAL: Detected NUMA nodes: 2 00:08:18.457 EAL: Detected shared linkage of DPDK 00:08:18.457 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:18.457 EAL: Selected IOVA mode 'VA' 00:08:18.457 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.457 EAL: VFIO support initialized 00:08:18.457 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:18.717 EAL: Using IOMMU type 1 (Type 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:08:18.717 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:08:19.656 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:08:22.946 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:08:22.946 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:08:22.946 Starting DPDK initialization... 00:08:22.946 Starting SPDK post initialization... 00:08:22.946 SPDK NVMe probe 00:08:22.946 Attaching to 0000:82:00.0 00:08:22.946 Attached to 0000:82:00.0 00:08:22.946 Cleaning up... 00:08:22.946 00:08:22.946 real 0m4.544s 00:08:22.946 user 0m3.307s 00:08:22.946 sys 0m0.286s 00:08:22.946 22:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.946 22:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:22.946 ************************************ 00:08:22.946 END TEST env_dpdk_post_init 00:08:22.946 ************************************ 00:08:22.946 22:48:59 env -- common/autotest_common.sh@1142 -- # return 0 00:08:22.946 22:48:59 env -- env/env.sh@26 -- # uname 00:08:22.946 22:48:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:22.946 22:48:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:22.946 22:48:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.946 22:48:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.946 22:48:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:22.946 ************************************ 00:08:22.946 START TEST env_mem_callbacks 00:08:22.946 ************************************ 00:08:22.946 22:48:59 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:22.946 EAL: Detected CPU lcores: 48 00:08:22.946 EAL: Detected NUMA nodes: 2 00:08:22.946 EAL: Detected shared linkage of DPDK 00:08:22.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:23.205 EAL: Selected IOVA mode 'VA' 00:08:23.205 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.205 EAL: VFIO support initialized 00:08:23.205 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:23.205 00:08:23.205 00:08:23.205 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.205 http://cunit.sourceforge.net/ 00:08:23.205 00:08:23.205 00:08:23.205 Suite: memory 00:08:23.205 Test: test ... 
00:08:23.205 register 0x200000200000 2097152 00:08:23.205 malloc 3145728 00:08:23.205 register 0x200000400000 4194304 00:08:23.205 buf 0x200000500000 len 3145728 PASSED 00:08:23.205 malloc 64 00:08:23.205 buf 0x2000004fff40 len 64 PASSED 00:08:23.205 malloc 4194304 00:08:23.205 register 0x200000800000 6291456 00:08:23.205 buf 0x200000a00000 len 4194304 PASSED 00:08:23.205 free 0x200000500000 3145728 00:08:23.205 free 0x2000004fff40 64 00:08:23.205 unregister 0x200000400000 4194304 PASSED 00:08:23.205 free 0x200000a00000 4194304 00:08:23.205 unregister 0x200000800000 6291456 PASSED 00:08:23.205 malloc 8388608 00:08:23.205 register 0x200000400000 10485760 00:08:23.205 buf 0x200000600000 len 8388608 PASSED 00:08:23.205 free 0x200000600000 8388608 00:08:23.205 unregister 0x200000400000 10485760 PASSED 00:08:23.205 passed 00:08:23.205 00:08:23.205 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.205 suites 1 1 n/a 0 0 00:08:23.205 tests 1 1 1 0 0 00:08:23.205 asserts 15 15 15 0 n/a 00:08:23.205 00:08:23.206 Elapsed time = 0.008 seconds 00:08:23.206 00:08:23.206 real 0m0.100s 00:08:23.206 user 0m0.021s 00:08:23.206 sys 0m0.078s 00:08:23.206 22:48:59 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.206 22:48:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:23.206 ************************************ 00:08:23.206 END TEST env_mem_callbacks 00:08:23.206 ************************************ 00:08:23.206 22:48:59 env -- common/autotest_common.sh@1142 -- # return 0 00:08:23.206 00:08:23.206 real 0m7.726s 00:08:23.206 user 0m5.045s 00:08:23.206 sys 0m1.710s 00:08:23.206 22:48:59 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.206 22:48:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:23.206 ************************************ 00:08:23.206 END TEST env 00:08:23.206 ************************************ 00:08:23.206 22:48:59 -- common/autotest_common.sh@1142 -- # return 0 00:08:23.206 22:48:59 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:23.206 22:48:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.206 22:48:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.206 22:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:23.206 ************************************ 00:08:23.206 START TEST rpc 00:08:23.206 ************************************ 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:23.206 * Looking for test storage... 00:08:23.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.206 22:48:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=743025 00:08:23.206 22:48:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:23.206 22:48:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:23.206 22:48:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 743025 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@829 -- # '[' -z 743025 ']' 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
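Once this listen message appears, the freshly started target (launched with '-e bdev' so bdev tracepoints are enabled) could also be queried by hand over the same socket before rpc.sh drives it; a small illustrative check:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version   # confirms the JSON-RPC server answers
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs     # prints [] before the rpc tests create any bdevs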
00:08:23.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.206 22:48:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.465 [2024-07-22 22:48:59.600302] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:08:23.465 [2024-07-22 22:48:59.600511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743025 ] 00:08:23.465 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.465 [2024-07-22 22:48:59.720970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.726 [2024-07-22 22:48:59.842497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:23.726 [2024-07-22 22:48:59.842614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 743025' to capture a snapshot of events at runtime. 00:08:23.726 [2024-07-22 22:48:59.842649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.726 [2024-07-22 22:48:59.842679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.726 [2024-07-22 22:48:59.842705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid743025 for offline analysis/debug. 00:08:23.726 [2024-07-22 22:48:59.842786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.986 22:49:00 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.986 22:49:00 rpc -- common/autotest_common.sh@862 -- # return 0 00:08:23.986 22:49:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.986 22:49:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.986 22:49:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:23.986 22:49:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:23.986 22:49:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.986 22:49:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.986 22:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.246 ************************************ 00:08:24.246 START TEST rpc_integrity 00:08:24.246 ************************************ 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:24.246 { 00:08:24.246 "name": "Malloc0", 00:08:24.246 "aliases": [ 00:08:24.246 "458992ef-57e2-41be-95b8-cf238410f5a7" 00:08:24.246 ], 00:08:24.246 "product_name": "Malloc disk", 00:08:24.246 "block_size": 512, 00:08:24.246 "num_blocks": 16384, 00:08:24.246 "uuid": "458992ef-57e2-41be-95b8-cf238410f5a7", 00:08:24.246 "assigned_rate_limits": { 00:08:24.246 "rw_ios_per_sec": 0, 00:08:24.246 "rw_mbytes_per_sec": 0, 00:08:24.246 "r_mbytes_per_sec": 0, 00:08:24.246 "w_mbytes_per_sec": 0 00:08:24.246 }, 00:08:24.246 "claimed": false, 00:08:24.246 "zoned": false, 00:08:24.246 "supported_io_types": { 00:08:24.246 "read": true, 00:08:24.246 "write": true, 00:08:24.246 "unmap": true, 00:08:24.246 "flush": true, 00:08:24.246 "reset": true, 00:08:24.246 "nvme_admin": false, 00:08:24.246 "nvme_io": false, 00:08:24.246 "nvme_io_md": false, 00:08:24.246 "write_zeroes": true, 00:08:24.246 "zcopy": true, 00:08:24.246 "get_zone_info": false, 00:08:24.246 "zone_management": false, 00:08:24.246 "zone_append": false, 00:08:24.246 "compare": false, 00:08:24.246 "compare_and_write": false, 00:08:24.246 "abort": true, 00:08:24.246 "seek_hole": false, 00:08:24.246 "seek_data": false, 00:08:24.246 "copy": true, 00:08:24.246 "nvme_iov_md": false 00:08:24.246 }, 00:08:24.246 "memory_domains": [ 00:08:24.246 { 00:08:24.246 "dma_device_id": "system", 00:08:24.246 "dma_device_type": 1 00:08:24.246 }, 00:08:24.246 { 00:08:24.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.246 "dma_device_type": 2 00:08:24.246 } 00:08:24.246 ], 00:08:24.246 "driver_specific": {} 00:08:24.246 } 00:08:24.246 ]' 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:24.246 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.246 [2024-07-22 22:49:00.512716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:24.246 [2024-07-22 22:49:00.512813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.246 [2024-07-22 22:49:00.512868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x149c0b0 00:08:24.246 [2024-07-22 22:49:00.512903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.246 
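The rpc_integrity run here drives a create/inspect/delete cycle entirely through rpc_cmd. Issued by hand against the same socket, the sequence would look roughly as follows (a sketch only, with the bdev names used in this run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_malloc_create 8 512                       # 8 MiB malloc bdev with 512-byte blocks -> Malloc0
    rpc bdev_passthru_create -b Malloc0 -p Passthru0   # passthru bdev claiming Malloc0
    rpc bdev_get_bdevs | jq length                     # expect 2 (Malloc0 and Passthru0)
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete Malloc0
    rpc bdev_get_bdevs | jq length                     # back to 0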
[2024-07-22 22:49:00.515691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.246 [2024-07-22 22:49:00.515756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:24.246 Passthru0 00:08:24.246 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.247 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:24.247 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.247 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.247 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.247 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:24.247 { 00:08:24.247 "name": "Malloc0", 00:08:24.247 "aliases": [ 00:08:24.247 "458992ef-57e2-41be-95b8-cf238410f5a7" 00:08:24.247 ], 00:08:24.247 "product_name": "Malloc disk", 00:08:24.247 "block_size": 512, 00:08:24.247 "num_blocks": 16384, 00:08:24.247 "uuid": "458992ef-57e2-41be-95b8-cf238410f5a7", 00:08:24.247 "assigned_rate_limits": { 00:08:24.247 "rw_ios_per_sec": 0, 00:08:24.247 "rw_mbytes_per_sec": 0, 00:08:24.247 "r_mbytes_per_sec": 0, 00:08:24.247 "w_mbytes_per_sec": 0 00:08:24.247 }, 00:08:24.247 "claimed": true, 00:08:24.247 "claim_type": "exclusive_write", 00:08:24.247 "zoned": false, 00:08:24.247 "supported_io_types": { 00:08:24.247 "read": true, 00:08:24.247 "write": true, 00:08:24.247 "unmap": true, 00:08:24.247 "flush": true, 00:08:24.247 "reset": true, 00:08:24.247 "nvme_admin": false, 00:08:24.247 "nvme_io": false, 00:08:24.247 "nvme_io_md": false, 00:08:24.247 "write_zeroes": true, 00:08:24.247 "zcopy": true, 00:08:24.247 "get_zone_info": false, 00:08:24.247 "zone_management": false, 00:08:24.247 "zone_append": false, 00:08:24.247 "compare": false, 00:08:24.247 "compare_and_write": false, 00:08:24.247 "abort": true, 00:08:24.247 "seek_hole": false, 00:08:24.247 "seek_data": false, 00:08:24.247 "copy": true, 00:08:24.247 "nvme_iov_md": false 00:08:24.247 }, 00:08:24.247 "memory_domains": [ 00:08:24.247 { 00:08:24.247 "dma_device_id": "system", 00:08:24.247 "dma_device_type": 1 00:08:24.247 }, 00:08:24.247 { 00:08:24.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.247 "dma_device_type": 2 00:08:24.247 } 00:08:24.247 ], 00:08:24.247 "driver_specific": {} 00:08:24.247 }, 00:08:24.247 { 00:08:24.247 "name": "Passthru0", 00:08:24.247 "aliases": [ 00:08:24.247 "70d63908-4298-5d1c-95d9-aeecf9711bba" 00:08:24.247 ], 00:08:24.247 "product_name": "passthru", 00:08:24.247 "block_size": 512, 00:08:24.247 "num_blocks": 16384, 00:08:24.247 "uuid": "70d63908-4298-5d1c-95d9-aeecf9711bba", 00:08:24.247 "assigned_rate_limits": { 00:08:24.247 "rw_ios_per_sec": 0, 00:08:24.247 "rw_mbytes_per_sec": 0, 00:08:24.247 "r_mbytes_per_sec": 0, 00:08:24.247 "w_mbytes_per_sec": 0 00:08:24.247 }, 00:08:24.247 "claimed": false, 00:08:24.247 "zoned": false, 00:08:24.247 "supported_io_types": { 00:08:24.247 "read": true, 00:08:24.247 "write": true, 00:08:24.247 "unmap": true, 00:08:24.247 "flush": true, 00:08:24.247 "reset": true, 00:08:24.247 "nvme_admin": false, 00:08:24.247 "nvme_io": false, 00:08:24.247 "nvme_io_md": false, 00:08:24.247 "write_zeroes": true, 00:08:24.247 "zcopy": true, 00:08:24.247 "get_zone_info": false, 00:08:24.247 "zone_management": false, 00:08:24.247 "zone_append": false, 00:08:24.247 "compare": false, 00:08:24.247 "compare_and_write": false, 00:08:24.247 "abort": true, 00:08:24.247 "seek_hole": false, 
00:08:24.247 "seek_data": false, 00:08:24.247 "copy": true, 00:08:24.247 "nvme_iov_md": false 00:08:24.247 }, 00:08:24.247 "memory_domains": [ 00:08:24.247 { 00:08:24.247 "dma_device_id": "system", 00:08:24.247 "dma_device_type": 1 00:08:24.247 }, 00:08:24.247 { 00:08:24.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.247 "dma_device_type": 2 00:08:24.247 } 00:08:24.247 ], 00:08:24.247 "driver_specific": { 00:08:24.247 "passthru": { 00:08:24.247 "name": "Passthru0", 00:08:24.247 "base_bdev_name": "Malloc0" 00:08:24.247 } 00:08:24.247 } 00:08:24.247 } 00:08:24.247 ]' 00:08:24.247 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:24.507 22:49:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:24.507 00:08:24.507 real 0m0.431s 00:08:24.507 user 0m0.320s 00:08:24.507 sys 0m0.042s 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.507 22:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 ************************************ 00:08:24.507 END TEST rpc_integrity 00:08:24.507 ************************************ 00:08:24.507 22:49:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:24.507 22:49:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:24.507 22:49:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:24.507 22:49:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.507 22:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.768 ************************************ 00:08:24.768 START TEST rpc_plugins 00:08:24.768 ************************************ 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:24.768 { 00:08:24.768 "name": "Malloc1", 00:08:24.768 "aliases": [ 00:08:24.768 "874ca48e-f4ad-463f-9010-6f2f64be4240" 00:08:24.768 ], 00:08:24.768 "product_name": "Malloc disk", 00:08:24.768 "block_size": 4096, 00:08:24.768 "num_blocks": 256, 00:08:24.768 "uuid": "874ca48e-f4ad-463f-9010-6f2f64be4240", 00:08:24.768 "assigned_rate_limits": { 00:08:24.768 "rw_ios_per_sec": 0, 00:08:24.768 "rw_mbytes_per_sec": 0, 00:08:24.768 "r_mbytes_per_sec": 0, 00:08:24.768 "w_mbytes_per_sec": 0 00:08:24.768 }, 00:08:24.768 "claimed": false, 00:08:24.768 "zoned": false, 00:08:24.768 "supported_io_types": { 00:08:24.768 "read": true, 00:08:24.768 "write": true, 00:08:24.768 "unmap": true, 00:08:24.768 "flush": true, 00:08:24.768 "reset": true, 00:08:24.768 "nvme_admin": false, 00:08:24.768 "nvme_io": false, 00:08:24.768 "nvme_io_md": false, 00:08:24.768 "write_zeroes": true, 00:08:24.768 "zcopy": true, 00:08:24.768 "get_zone_info": false, 00:08:24.768 "zone_management": false, 00:08:24.768 "zone_append": false, 00:08:24.768 "compare": false, 00:08:24.768 "compare_and_write": false, 00:08:24.768 "abort": true, 00:08:24.768 "seek_hole": false, 00:08:24.768 "seek_data": false, 00:08:24.768 "copy": true, 00:08:24.768 "nvme_iov_md": false 00:08:24.768 }, 00:08:24.768 "memory_domains": [ 00:08:24.768 { 00:08:24.768 "dma_device_id": "system", 00:08:24.768 "dma_device_type": 1 00:08:24.768 }, 00:08:24.768 { 00:08:24.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.768 "dma_device_type": 2 00:08:24.768 } 00:08:24.768 ], 00:08:24.768 "driver_specific": {} 00:08:24.768 } 00:08:24.768 ]' 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:24.768 22:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:24.768 22:49:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:24.768 22:49:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:24.768 00:08:24.768 real 0m0.205s 00:08:24.768 user 0m0.152s 00:08:24.768 sys 0m0.020s 00:08:24.768 22:49:01 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.768 22:49:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:24.768 ************************************ 00:08:24.768 END TEST rpc_plugins 00:08:24.768 ************************************ 00:08:24.768 22:49:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:24.768 22:49:01 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:24.768 22:49:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:24.768 22:49:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.768 22:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 ************************************ 00:08:25.028 START TEST rpc_trace_cmd_test 00:08:25.028 ************************************ 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:25.028 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid743025", 00:08:25.028 "tpoint_group_mask": "0x8", 00:08:25.028 "iscsi_conn": { 00:08:25.028 "mask": "0x2", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "scsi": { 00:08:25.028 "mask": "0x4", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "bdev": { 00:08:25.028 "mask": "0x8", 00:08:25.028 "tpoint_mask": "0xffffffffffffffff" 00:08:25.028 }, 00:08:25.028 "nvmf_rdma": { 00:08:25.028 "mask": "0x10", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "nvmf_tcp": { 00:08:25.028 "mask": "0x20", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "ftl": { 00:08:25.028 "mask": "0x40", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "blobfs": { 00:08:25.028 "mask": "0x80", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "dsa": { 00:08:25.028 "mask": "0x200", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "thread": { 00:08:25.028 "mask": "0x400", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "nvme_pcie": { 00:08:25.028 "mask": "0x800", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "iaa": { 00:08:25.028 "mask": "0x1000", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "nvme_tcp": { 00:08:25.028 "mask": "0x2000", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "bdev_nvme": { 00:08:25.028 "mask": "0x4000", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 }, 00:08:25.028 "sock": { 00:08:25.028 "mask": "0x8000", 00:08:25.028 "tpoint_mask": "0x0" 00:08:25.028 } 00:08:25.028 }' 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:25.028 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
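The jq assertions above pick the trace_get_info output apart: it must carry more than the two bookkeeping keys, expose tpoint_shm_path and tpoint_group_mask, and report a non-zero tpoint_mask for the bdev group (mask "0x8" in this run). A minimal stand-alone sketch of the same checks against a running target, with the RPC client and socket paths written out as assumptions rather than taken from this log:

    rpc=/path/to/spdk/scripts/rpc.py        # assumed location of the SPDK RPC client
    sock=/var/tmp/spdk.sock                 # assumed RPC listen socket
    info=$("$rpc" -s "$sock" trace_get_info)
    [ "$(echo "$info" | jq length)" -gt 2 ]                        # more than shm path + group mask
    [ "$(echo "$info" | jq 'has("tpoint_group_mask")')" = true ]
    [ "$(echo "$info" | jq 'has("tpoint_shm_path")')" = true ]
    [ "$(echo "$info" | jq 'has("bdev")')" = true ]
    [ "$(echo "$info" | jq -r .bdev.tpoint_mask)" != 0x0 ]        # bdev group (0x8) actually enabled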
00:08:25.288 00:08:25.288 real 0m0.400s 00:08:25.288 user 0m0.357s 00:08:25.288 sys 0m0.032s 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.288 22:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.288 ************************************ 00:08:25.288 END TEST rpc_trace_cmd_test 00:08:25.288 ************************************ 00:08:25.288 22:49:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:25.288 22:49:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:25.288 22:49:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:25.288 22:49:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:25.288 22:49:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:25.288 22:49:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.288 22:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.288 ************************************ 00:08:25.288 START TEST rpc_daemon_integrity 00:08:25.288 ************************************ 00:08:25.288 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:08:25.288 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:25.288 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.288 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.288 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.288 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:25.549 { 00:08:25.549 "name": "Malloc2", 00:08:25.549 "aliases": [ 00:08:25.549 "6f82f2f7-24f0-4883-b0bc-ffcad64ad5b9" 00:08:25.549 ], 00:08:25.549 "product_name": "Malloc disk", 00:08:25.549 "block_size": 512, 00:08:25.549 "num_blocks": 16384, 00:08:25.549 "uuid": "6f82f2f7-24f0-4883-b0bc-ffcad64ad5b9", 00:08:25.549 "assigned_rate_limits": { 00:08:25.549 "rw_ios_per_sec": 0, 00:08:25.549 "rw_mbytes_per_sec": 0, 00:08:25.549 "r_mbytes_per_sec": 0, 00:08:25.549 "w_mbytes_per_sec": 0 00:08:25.549 }, 00:08:25.549 "claimed": false, 00:08:25.549 "zoned": false, 00:08:25.549 "supported_io_types": { 00:08:25.549 "read": true, 00:08:25.549 "write": true, 00:08:25.549 "unmap": true, 00:08:25.549 "flush": true, 00:08:25.549 "reset": true, 00:08:25.549 "nvme_admin": false, 00:08:25.549 "nvme_io": false, 
00:08:25.549 "nvme_io_md": false, 00:08:25.549 "write_zeroes": true, 00:08:25.549 "zcopy": true, 00:08:25.549 "get_zone_info": false, 00:08:25.549 "zone_management": false, 00:08:25.549 "zone_append": false, 00:08:25.549 "compare": false, 00:08:25.549 "compare_and_write": false, 00:08:25.549 "abort": true, 00:08:25.549 "seek_hole": false, 00:08:25.549 "seek_data": false, 00:08:25.549 "copy": true, 00:08:25.549 "nvme_iov_md": false 00:08:25.549 }, 00:08:25.549 "memory_domains": [ 00:08:25.549 { 00:08:25.549 "dma_device_id": "system", 00:08:25.549 "dma_device_type": 1 00:08:25.549 }, 00:08:25.549 { 00:08:25.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.549 "dma_device_type": 2 00:08:25.549 } 00:08:25.549 ], 00:08:25.549 "driver_specific": {} 00:08:25.549 } 00:08:25.549 ]' 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.549 [2024-07-22 22:49:01.801243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:25.549 [2024-07-22 22:49:01.801368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.549 [2024-07-22 22:49:01.801403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12eb890 00:08:25.549 [2024-07-22 22:49:01.801424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.549 [2024-07-22 22:49:01.803993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.549 [2024-07-22 22:49:01.804055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:25.549 Passthru0 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.549 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:25.549 { 00:08:25.549 "name": "Malloc2", 00:08:25.549 "aliases": [ 00:08:25.549 "6f82f2f7-24f0-4883-b0bc-ffcad64ad5b9" 00:08:25.549 ], 00:08:25.549 "product_name": "Malloc disk", 00:08:25.549 "block_size": 512, 00:08:25.549 "num_blocks": 16384, 00:08:25.549 "uuid": "6f82f2f7-24f0-4883-b0bc-ffcad64ad5b9", 00:08:25.549 "assigned_rate_limits": { 00:08:25.549 "rw_ios_per_sec": 0, 00:08:25.549 "rw_mbytes_per_sec": 0, 00:08:25.549 "r_mbytes_per_sec": 0, 00:08:25.549 "w_mbytes_per_sec": 0 00:08:25.549 }, 00:08:25.549 "claimed": true, 00:08:25.549 "claim_type": "exclusive_write", 00:08:25.549 "zoned": false, 00:08:25.549 "supported_io_types": { 00:08:25.549 "read": true, 00:08:25.549 "write": true, 00:08:25.549 "unmap": true, 00:08:25.549 "flush": true, 00:08:25.549 "reset": true, 00:08:25.549 "nvme_admin": false, 00:08:25.549 "nvme_io": false, 00:08:25.549 "nvme_io_md": false, 00:08:25.549 "write_zeroes": true, 00:08:25.549 "zcopy": true, 00:08:25.549 "get_zone_info": 
false, 00:08:25.549 "zone_management": false, 00:08:25.549 "zone_append": false, 00:08:25.549 "compare": false, 00:08:25.549 "compare_and_write": false, 00:08:25.549 "abort": true, 00:08:25.549 "seek_hole": false, 00:08:25.549 "seek_data": false, 00:08:25.549 "copy": true, 00:08:25.549 "nvme_iov_md": false 00:08:25.549 }, 00:08:25.549 "memory_domains": [ 00:08:25.549 { 00:08:25.549 "dma_device_id": "system", 00:08:25.549 "dma_device_type": 1 00:08:25.549 }, 00:08:25.549 { 00:08:25.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.549 "dma_device_type": 2 00:08:25.549 } 00:08:25.549 ], 00:08:25.549 "driver_specific": {} 00:08:25.549 }, 00:08:25.549 { 00:08:25.549 "name": "Passthru0", 00:08:25.549 "aliases": [ 00:08:25.549 "3a230465-eacb-510f-afa8-4c51463c77f4" 00:08:25.549 ], 00:08:25.549 "product_name": "passthru", 00:08:25.549 "block_size": 512, 00:08:25.549 "num_blocks": 16384, 00:08:25.549 "uuid": "3a230465-eacb-510f-afa8-4c51463c77f4", 00:08:25.549 "assigned_rate_limits": { 00:08:25.549 "rw_ios_per_sec": 0, 00:08:25.549 "rw_mbytes_per_sec": 0, 00:08:25.549 "r_mbytes_per_sec": 0, 00:08:25.549 "w_mbytes_per_sec": 0 00:08:25.549 }, 00:08:25.549 "claimed": false, 00:08:25.549 "zoned": false, 00:08:25.549 "supported_io_types": { 00:08:25.549 "read": true, 00:08:25.549 "write": true, 00:08:25.549 "unmap": true, 00:08:25.549 "flush": true, 00:08:25.549 "reset": true, 00:08:25.549 "nvme_admin": false, 00:08:25.549 "nvme_io": false, 00:08:25.549 "nvme_io_md": false, 00:08:25.549 "write_zeroes": true, 00:08:25.549 "zcopy": true, 00:08:25.549 "get_zone_info": false, 00:08:25.549 "zone_management": false, 00:08:25.549 "zone_append": false, 00:08:25.549 "compare": false, 00:08:25.549 "compare_and_write": false, 00:08:25.549 "abort": true, 00:08:25.549 "seek_hole": false, 00:08:25.549 "seek_data": false, 00:08:25.549 "copy": true, 00:08:25.549 "nvme_iov_md": false 00:08:25.550 }, 00:08:25.550 "memory_domains": [ 00:08:25.550 { 00:08:25.550 "dma_device_id": "system", 00:08:25.550 "dma_device_type": 1 00:08:25.550 }, 00:08:25.550 { 00:08:25.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.550 "dma_device_type": 2 00:08:25.550 } 00:08:25.550 ], 00:08:25.550 "driver_specific": { 00:08:25.550 "passthru": { 00:08:25.550 "name": "Passthru0", 00:08:25.550 "base_bdev_name": "Malloc2" 00:08:25.550 } 00:08:25.550 } 00:08:25.550 } 00:08:25.550 ]' 00:08:25.550 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:25.809 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:25.809 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:25.809 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.809 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.809 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.809 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.810 22:49:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:25.810 22:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:25.810 22:49:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:25.810 00:08:25.810 real 0m0.441s 00:08:25.810 user 0m0.333s 00:08:25.810 sys 0m0.040s 00:08:25.810 22:49:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.810 22:49:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:25.810 ************************************ 00:08:25.810 END TEST rpc_daemon_integrity 00:08:25.810 ************************************ 00:08:25.810 22:49:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:25.810 22:49:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:25.810 22:49:02 rpc -- rpc/rpc.sh@84 -- # killprocess 743025 00:08:25.810 22:49:02 rpc -- common/autotest_common.sh@948 -- # '[' -z 743025 ']' 00:08:25.810 22:49:02 rpc -- common/autotest_common.sh@952 -- # kill -0 743025 00:08:25.810 22:49:02 rpc -- common/autotest_common.sh@953 -- # uname 00:08:25.810 22:49:02 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:25.810 22:49:02 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 743025 00:08:26.070 22:49:02 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:26.070 22:49:02 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:26.070 22:49:02 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 743025' 00:08:26.070 killing process with pid 743025 00:08:26.070 22:49:02 rpc -- common/autotest_common.sh@967 -- # kill 743025 00:08:26.070 22:49:02 rpc -- common/autotest_common.sh@972 -- # wait 743025 00:08:26.639 00:08:26.639 real 0m3.277s 00:08:26.639 user 0m4.492s 00:08:26.639 sys 0m1.056s 00:08:26.639 22:49:02 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.639 22:49:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.639 ************************************ 00:08:26.639 END TEST rpc 00:08:26.639 ************************************ 00:08:26.639 22:49:02 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.639 22:49:02 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:26.639 22:49:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.639 22:49:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.639 22:49:02 -- common/autotest_common.sh@10 -- # set +x 00:08:26.639 ************************************ 00:08:26.639 START TEST skip_rpc 00:08:26.639 ************************************ 00:08:26.639 22:49:02 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:26.639 * Looking for test storage... 
00:08:26.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:26.639 22:49:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:26.639 22:49:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:26.639 22:49:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:26.639 22:49:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.639 22:49:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.639 22:49:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.639 ************************************ 00:08:26.639 START TEST skip_rpc 00:08:26.639 ************************************ 00:08:26.639 22:49:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:08:26.639 22:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=743591 00:08:26.639 22:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:26.639 22:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:26.639 22:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:26.899 [2024-07-22 22:49:02.959071] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:08:26.899 [2024-07-22 22:49:02.959181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743591 ] 00:08:26.899 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.899 [2024-07-22 22:49:03.061625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.160 [2024-07-22 22:49:03.212617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- 
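With the reactor up but the target launched via --no-rpc-server, the step that follows expects spdk_get_version to fail, and only then tears the process down. A rough stand-alone equivalent of that negative check, with the binary and client paths as placeholders rather than the workspace paths above:

    tgt=/path/to/spdk/build/bin/spdk_tgt    # assumed build output path
    rpc=/path/to/spdk/scripts/rpc.py        # assumed RPC client path
    "$tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                 # same grace period the test uses before probing
    if "$rpc" spdk_get_version; then        # must NOT succeed: nothing is listening on the socket
        echo "unexpected: RPC answered although --no-rpc-server was given" >&2
        kill -9 "$pid"; exit 1
    fi
    kill -9 "$pid"                          # mirrors the killprocess step further down
    wait "$pid" || true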
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 743591 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 743591 ']' 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 743591 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 743591 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 743591' 00:08:32.437 killing process with pid 743591 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 743591 00:08:32.437 22:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 743591 00:08:32.437 00:08:32.437 real 0m5.620s 00:08:32.437 user 0m5.089s 00:08:32.437 sys 0m0.534s 00:08:32.437 22:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.437 22:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.437 ************************************ 00:08:32.437 END TEST skip_rpc 00:08:32.437 ************************************ 00:08:32.437 22:49:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:32.437 22:49:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:32.437 22:49:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.437 22:49:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.437 22:49:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.437 ************************************ 00:08:32.437 START TEST skip_rpc_with_json 00:08:32.437 ************************************ 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=744186 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 744186 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 744186 ']' 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
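What follows is the capture half of the JSON round trip: the test first provokes a "No such device" error from nvmf_get_transports (the TCP transport does not exist yet), then creates it, and finally snapshots the whole target configuration with save_config. Compressed into a hedged sketch, with the client and output paths as placeholders:

    rpc=/path/to/spdk/scripts/rpc.py        # assumed RPC client path
    cfg=/path/to/test/rpc/config.json       # placeholder for the CONFIG_PATH used by the test
    "$rpc" nvmf_get_transports --trtype tcp || true   # expected to fail before the transport exists
    "$rpc" nvmf_create_transport -t tcp               # logs "*** TCP Transport Init ***" in the target
    "$rpc" save_config > "$cfg"                       # per-subsystem JSON, dumped in full below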
00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.437 22:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:32.437 [2024-07-22 22:49:08.704803] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:08:32.437 [2024-07-22 22:49:08.704992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744186 ] 00:08:32.697 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.697 [2024-07-22 22:49:08.840158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.697 [2024-07-22 22:49:08.991974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.266 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.266 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:08:33.266 22:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:33.266 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.266 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:33.266 [2024-07-22 22:49:09.407494] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:33.266 request: 00:08:33.266 { 00:08:33.266 "trtype": "tcp", 00:08:33.266 "method": "nvmf_get_transports", 00:08:33.267 "req_id": 1 00:08:33.267 } 00:08:33.267 Got JSON-RPC error response 00:08:33.267 response: 00:08:33.267 { 00:08:33.267 "code": -19, 00:08:33.267 "message": "No such device" 00:08:33.267 } 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:33.267 [2024-07-22 22:49:09.419710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.267 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:33.526 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.526 22:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:33.526 { 00:08:33.526 "subsystems": [ 00:08:33.526 { 00:08:33.526 "subsystem": "vfio_user_target", 00:08:33.526 "config": null 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "keyring", 00:08:33.526 "config": [] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "iobuf", 00:08:33.526 "config": [ 00:08:33.526 { 00:08:33.526 "method": "iobuf_set_options", 00:08:33.526 "params": { 00:08:33.526 "small_pool_count": 8192, 00:08:33.526 "large_pool_count": 1024, 00:08:33.526 "small_bufsize": 8192, 00:08:33.526 "large_bufsize": 
135168 00:08:33.526 } 00:08:33.526 } 00:08:33.526 ] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "sock", 00:08:33.526 "config": [ 00:08:33.526 { 00:08:33.526 "method": "sock_set_default_impl", 00:08:33.526 "params": { 00:08:33.526 "impl_name": "posix" 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "sock_impl_set_options", 00:08:33.526 "params": { 00:08:33.526 "impl_name": "ssl", 00:08:33.526 "recv_buf_size": 4096, 00:08:33.526 "send_buf_size": 4096, 00:08:33.526 "enable_recv_pipe": true, 00:08:33.526 "enable_quickack": false, 00:08:33.526 "enable_placement_id": 0, 00:08:33.526 "enable_zerocopy_send_server": true, 00:08:33.526 "enable_zerocopy_send_client": false, 00:08:33.526 "zerocopy_threshold": 0, 00:08:33.526 "tls_version": 0, 00:08:33.526 "enable_ktls": false 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "sock_impl_set_options", 00:08:33.526 "params": { 00:08:33.526 "impl_name": "posix", 00:08:33.526 "recv_buf_size": 2097152, 00:08:33.526 "send_buf_size": 2097152, 00:08:33.526 "enable_recv_pipe": true, 00:08:33.526 "enable_quickack": false, 00:08:33.526 "enable_placement_id": 0, 00:08:33.526 "enable_zerocopy_send_server": true, 00:08:33.526 "enable_zerocopy_send_client": false, 00:08:33.526 "zerocopy_threshold": 0, 00:08:33.526 "tls_version": 0, 00:08:33.526 "enable_ktls": false 00:08:33.526 } 00:08:33.526 } 00:08:33.526 ] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "vmd", 00:08:33.526 "config": [] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "accel", 00:08:33.526 "config": [ 00:08:33.526 { 00:08:33.526 "method": "accel_set_options", 00:08:33.526 "params": { 00:08:33.526 "small_cache_size": 128, 00:08:33.526 "large_cache_size": 16, 00:08:33.526 "task_count": 2048, 00:08:33.526 "sequence_count": 2048, 00:08:33.526 "buf_count": 2048 00:08:33.526 } 00:08:33.526 } 00:08:33.526 ] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "bdev", 00:08:33.526 "config": [ 00:08:33.526 { 00:08:33.526 "method": "bdev_set_options", 00:08:33.526 "params": { 00:08:33.526 "bdev_io_pool_size": 65535, 00:08:33.526 "bdev_io_cache_size": 256, 00:08:33.526 "bdev_auto_examine": true, 00:08:33.526 "iobuf_small_cache_size": 128, 00:08:33.526 "iobuf_large_cache_size": 16 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "bdev_raid_set_options", 00:08:33.526 "params": { 00:08:33.526 "process_window_size_kb": 1024, 00:08:33.526 "process_max_bandwidth_mb_sec": 0 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "bdev_iscsi_set_options", 00:08:33.526 "params": { 00:08:33.526 "timeout_sec": 30 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "bdev_nvme_set_options", 00:08:33.526 "params": { 00:08:33.526 "action_on_timeout": "none", 00:08:33.526 "timeout_us": 0, 00:08:33.526 "timeout_admin_us": 0, 00:08:33.526 "keep_alive_timeout_ms": 10000, 00:08:33.526 "arbitration_burst": 0, 00:08:33.526 "low_priority_weight": 0, 00:08:33.526 "medium_priority_weight": 0, 00:08:33.526 "high_priority_weight": 0, 00:08:33.526 "nvme_adminq_poll_period_us": 10000, 00:08:33.526 "nvme_ioq_poll_period_us": 0, 00:08:33.526 "io_queue_requests": 0, 00:08:33.526 "delay_cmd_submit": true, 00:08:33.526 "transport_retry_count": 4, 00:08:33.526 "bdev_retry_count": 3, 00:08:33.526 "transport_ack_timeout": 0, 00:08:33.526 "ctrlr_loss_timeout_sec": 0, 00:08:33.526 "reconnect_delay_sec": 0, 00:08:33.526 "fast_io_fail_timeout_sec": 0, 00:08:33.526 "disable_auto_failback": false, 00:08:33.526 "generate_uuids": 
false, 00:08:33.526 "transport_tos": 0, 00:08:33.526 "nvme_error_stat": false, 00:08:33.526 "rdma_srq_size": 0, 00:08:33.526 "io_path_stat": false, 00:08:33.526 "allow_accel_sequence": false, 00:08:33.526 "rdma_max_cq_size": 0, 00:08:33.526 "rdma_cm_event_timeout_ms": 0, 00:08:33.526 "dhchap_digests": [ 00:08:33.526 "sha256", 00:08:33.526 "sha384", 00:08:33.526 "sha512" 00:08:33.526 ], 00:08:33.526 "dhchap_dhgroups": [ 00:08:33.526 "null", 00:08:33.526 "ffdhe2048", 00:08:33.526 "ffdhe3072", 00:08:33.526 "ffdhe4096", 00:08:33.526 "ffdhe6144", 00:08:33.526 "ffdhe8192" 00:08:33.526 ] 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "bdev_nvme_set_hotplug", 00:08:33.526 "params": { 00:08:33.526 "period_us": 100000, 00:08:33.526 "enable": false 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "bdev_wait_for_examine" 00:08:33.526 } 00:08:33.526 ] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "scsi", 00:08:33.526 "config": null 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "scheduler", 00:08:33.526 "config": [ 00:08:33.526 { 00:08:33.526 "method": "framework_set_scheduler", 00:08:33.526 "params": { 00:08:33.526 "name": "static" 00:08:33.526 } 00:08:33.526 } 00:08:33.526 ] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "vhost_scsi", 00:08:33.526 "config": [] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "vhost_blk", 00:08:33.526 "config": [] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "ublk", 00:08:33.526 "config": [] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "nbd", 00:08:33.526 "config": [] 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "subsystem": "nvmf", 00:08:33.526 "config": [ 00:08:33.526 { 00:08:33.526 "method": "nvmf_set_config", 00:08:33.526 "params": { 00:08:33.526 "discovery_filter": "match_any", 00:08:33.526 "admin_cmd_passthru": { 00:08:33.526 "identify_ctrlr": false 00:08:33.526 } 00:08:33.526 } 00:08:33.526 }, 00:08:33.526 { 00:08:33.526 "method": "nvmf_set_max_subsystems", 00:08:33.526 "params": { 00:08:33.527 "max_subsystems": 1024 00:08:33.527 } 00:08:33.527 }, 00:08:33.527 { 00:08:33.527 "method": "nvmf_set_crdt", 00:08:33.527 "params": { 00:08:33.527 "crdt1": 0, 00:08:33.527 "crdt2": 0, 00:08:33.527 "crdt3": 0 00:08:33.527 } 00:08:33.527 }, 00:08:33.527 { 00:08:33.527 "method": "nvmf_create_transport", 00:08:33.527 "params": { 00:08:33.527 "trtype": "TCP", 00:08:33.527 "max_queue_depth": 128, 00:08:33.527 "max_io_qpairs_per_ctrlr": 127, 00:08:33.527 "in_capsule_data_size": 4096, 00:08:33.527 "max_io_size": 131072, 00:08:33.527 "io_unit_size": 131072, 00:08:33.527 "max_aq_depth": 128, 00:08:33.527 "num_shared_buffers": 511, 00:08:33.527 "buf_cache_size": 4294967295, 00:08:33.527 "dif_insert_or_strip": false, 00:08:33.527 "zcopy": false, 00:08:33.527 "c2h_success": true, 00:08:33.527 "sock_priority": 0, 00:08:33.527 "abort_timeout_sec": 1, 00:08:33.527 "ack_timeout": 0, 00:08:33.527 "data_wr_pool_size": 0 00:08:33.527 } 00:08:33.527 } 00:08:33.527 ] 00:08:33.527 }, 00:08:33.527 { 00:08:33.527 "subsystem": "iscsi", 00:08:33.527 "config": [ 00:08:33.527 { 00:08:33.527 "method": "iscsi_set_options", 00:08:33.527 "params": { 00:08:33.527 "node_base": "iqn.2016-06.io.spdk", 00:08:33.527 "max_sessions": 128, 00:08:33.527 "max_connections_per_session": 2, 00:08:33.527 "max_queue_depth": 64, 00:08:33.527 "default_time2wait": 2, 00:08:33.527 "default_time2retain": 20, 00:08:33.527 "first_burst_length": 8192, 00:08:33.527 "immediate_data": true, 00:08:33.527 "allow_duplicated_isid": 
false, 00:08:33.527 "error_recovery_level": 0, 00:08:33.527 "nop_timeout": 60, 00:08:33.527 "nop_in_interval": 30, 00:08:33.527 "disable_chap": false, 00:08:33.527 "require_chap": false, 00:08:33.527 "mutual_chap": false, 00:08:33.527 "chap_group": 0, 00:08:33.527 "max_large_datain_per_connection": 64, 00:08:33.527 "max_r2t_per_connection": 4, 00:08:33.527 "pdu_pool_size": 36864, 00:08:33.527 "immediate_data_pool_size": 16384, 00:08:33.527 "data_out_pool_size": 2048 00:08:33.527 } 00:08:33.527 } 00:08:33.527 ] 00:08:33.527 } 00:08:33.527 ] 00:08:33.527 } 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 744186 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 744186 ']' 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 744186 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744186 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744186' 00:08:33.527 killing process with pid 744186 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 744186 00:08:33.527 22:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 744186 00:08:34.096 22:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=744424 00:08:34.096 22:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:34.096 22:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 744424 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 744424 ']' 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 744424 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744424 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744424' 00:08:39.375 killing process with pid 744424 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 744424 00:08:39.375 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 744424 
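The second target instance (pid 744424) above existed only to replay config.json; the verification that follows is a single grep of its captured log for the transport-init notice. A rough stand-alone form of that replay-and-check, with paths as placeholders:

    tgt=/path/to/spdk/build/bin/spdk_tgt    # assumed build output path
    cfg=/path/to/test/rpc/config.json       # config captured by save_config earlier
    log=/path/to/test/rpc/log.txt           # placeholder for the LOG_PATH used by the test
    "$tgt" --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
    pid=$!
    sleep 5
    kill -9 "$pid"; wait "$pid" || true
    grep -q 'TCP Transport Init' "$log"     # the saved config must have recreated the nvmf transport
    rm "$log"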
00:08:39.634 22:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:39.634 22:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:39.634 00:08:39.634 real 0m7.239s 00:08:39.634 user 0m6.656s 00:08:39.634 sys 0m1.201s 00:08:39.634 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.634 22:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:39.634 ************************************ 00:08:39.634 END TEST skip_rpc_with_json 00:08:39.634 ************************************ 00:08:39.634 22:49:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:39.634 22:49:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:39.634 22:49:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.635 22:49:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.635 22:49:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.635 ************************************ 00:08:39.635 START TEST skip_rpc_with_delay 00:08:39.635 ************************************ 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:39.635 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:39.894 [2024-07-22 22:49:15.967174] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:39.894 [2024-07-22 22:49:15.967323] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:39.894 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:08:39.894 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:39.894 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:39.894 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:39.894 00:08:39.894 real 0m0.079s 00:08:39.894 user 0m0.048s 00:08:39.894 sys 0m0.030s 00:08:39.894 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.894 22:49:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:39.894 ************************************ 00:08:39.894 END TEST skip_rpc_with_delay 00:08:39.894 ************************************ 00:08:39.894 22:49:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:39.894 22:49:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:39.894 22:49:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:39.894 22:49:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:39.894 22:49:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.894 22:49:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.894 22:49:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.894 ************************************ 00:08:39.894 START TEST exit_on_failed_rpc_init 00:08:39.894 ************************************ 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=745136 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 745136 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 745136 ']' 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.894 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:39.894 [2024-07-22 22:49:16.173791] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:08:39.894 [2024-07-22 22:49:16.173974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745136 ] 00:08:40.155 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.155 [2024-07-22 22:49:16.316031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.423 [2024-07-22 22:49:16.473424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:40.713 22:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:40.713 [2024-07-22 22:49:16.989850] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:08:40.713 [2024-07-22 22:49:16.990026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745153 ] 00:08:40.986 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.986 [2024-07-22 22:49:17.101250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.986 [2024-07-22 22:49:17.211881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.986 [2024-07-22 22:49:17.212023] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:40.986 [2024-07-22 22:49:17.212051] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:40.986 [2024-07-22 22:49:17.212067] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 745136 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 745136 ']' 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 745136 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745136 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745136' 00:08:41.246 killing process with pid 745136 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 745136 00:08:41.246 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 745136 00:08:41.816 00:08:41.816 real 0m1.903s 00:08:41.816 user 0m2.184s 00:08:41.816 sys 0m0.805s 00:08:41.816 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.816 22:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:41.816 ************************************ 00:08:41.816 END TEST exit_on_failed_rpc_init 00:08:41.816 ************************************ 00:08:41.816 22:49:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:41.816 22:49:18 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:41.816 00:08:41.816 real 0m15.257s 00:08:41.816 user 0m14.137s 00:08:41.816 sys 0m2.849s 00:08:41.816 22:49:18 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.816 22:49:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.816 ************************************ 00:08:41.816 END TEST skip_rpc 00:08:41.816 ************************************ 00:08:41.816 22:49:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:41.816 22:49:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:41.816 22:49:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.816 22:49:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.816 22:49:18 -- common/autotest_common.sh@10 -- # set +x 00:08:41.816 ************************************ 00:08:41.816 START TEST rpc_client 00:08:41.816 ************************************ 00:08:41.816 22:49:18 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:42.077 * Looking for test storage... 00:08:42.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:42.078 22:49:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:42.078 OK 00:08:42.078 22:49:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:42.078 00:08:42.078 real 0m0.128s 00:08:42.078 user 0m0.059s 00:08:42.078 sys 0m0.079s 00:08:42.078 22:49:18 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.078 22:49:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:42.078 ************************************ 00:08:42.078 END TEST rpc_client 00:08:42.078 ************************************ 00:08:42.078 22:49:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:42.078 22:49:18 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:42.078 22:49:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.078 22:49:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.078 22:49:18 -- common/autotest_common.sh@10 -- # set +x 00:08:42.078 ************************************ 00:08:42.078 START TEST json_config 00:08:42.078 ************************************ 00:08:42.078 22:49:18 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:42.078 22:49:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.078 22:49:18 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.078 22:49:18 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.339 22:49:18 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.339 22:49:18 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.339 22:49:18 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.339 22:49:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.339 22:49:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.339 22:49:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.339 22:49:18 json_config -- paths/export.sh@5 -- # export PATH 00:08:42.339 22:49:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@47 -- # : 0 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.339 22:49:18 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.339 22:49:18 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:08:42.339 INFO: JSON configuration test init 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:08:42.339 22:49:18 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.340 22:49:18 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.340 22:49:18 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:08:42.340 22:49:18 json_config -- json_config/common.sh@9 -- # local app=target 00:08:42.340 22:49:18 json_config -- json_config/common.sh@10 -- # shift 00:08:42.340 22:49:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:42.340 22:49:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:42.340 22:49:18 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:08:42.340 22:49:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.340 22:49:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:42.340 22:49:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=745524 00:08:42.340 22:49:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:42.340 22:49:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:42.340 Waiting for target to run... 00:08:42.340 22:49:18 json_config -- json_config/common.sh@25 -- # waitforlisten 745524 /var/tmp/spdk_tgt.sock 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@829 -- # '[' -z 745524 ']' 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:42.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.340 22:49:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.340 [2024-07-22 22:49:18.481578] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:08:42.340 [2024-07-22 22:49:18.481700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745524 ] 00:08:42.340 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.910 [2024-07-22 22:49:19.190956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.171 [2024-07-22 22:49:19.321641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.110 22:49:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.110 22:49:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:08:44.110 22:49:20 json_config -- json_config/common.sh@26 -- # echo '' 00:08:44.110 00:08:44.110 22:49:20 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:08:44.110 22:49:20 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:08:44.110 22:49:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.110 22:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:44.110 22:49:20 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:08:44.110 22:49:20 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:08:44.110 22:49:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.110 22:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:44.110 22:49:20 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:44.110 22:49:20 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:08:44.110 22:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:47.405 22:49:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.405 22:49:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:47.405 22:49:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:47.405 22:49:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@48 -- # local get_types 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@51 -- # sort 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:08:47.665 22:49:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.665 22:49:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@59 -- # return 0 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:08:47.665 22:49:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.665 22:49:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:08:47.665 22:49:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:47.665 22:49:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:48.234 MallocForNvmf0 00:08:48.234 22:49:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:48.234 22:49:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:48.803 MallocForNvmf1 00:08:48.803 22:49:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:48.803 22:49:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:49.373 [2024-07-22 22:49:25.588019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.373 22:49:25 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.373 22:49:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.942 22:49:26 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:49.943 22:49:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:50.881 22:49:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:50.881 22:49:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:51.451 22:49:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:51.451 22:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:52.020 [2024-07-22 22:49:28.109153] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:52.020 22:49:28 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:08:52.020 22:49:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.020 22:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.020 22:49:28 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:08:52.020 22:49:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.020 22:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.020 22:49:28 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:08:52.020 22:49:28 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:52.020 22:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:08:52.590 MallocBdevForConfigChangeCheck 00:08:52.590 22:49:28 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:08:52.590 22:49:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.590 22:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.590 22:49:28 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:08:52.590 22:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:52.850 22:49:29 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:08:52.850 INFO: shutting down applications... 00:08:52.850 22:49:29 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:08:52.850 22:49:29 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:08:52.850 22:49:29 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:08:52.850 22:49:29 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:54.759 Calling clear_iscsi_subsystem 00:08:54.759 Calling clear_nvmf_subsystem 00:08:54.759 Calling clear_nbd_subsystem 00:08:54.759 Calling clear_ublk_subsystem 00:08:54.759 Calling clear_vhost_blk_subsystem 00:08:54.759 Calling clear_vhost_scsi_subsystem 00:08:54.759 Calling clear_bdev_subsystem 00:08:54.759 22:49:30 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:54.759 22:49:30 json_config -- json_config/json_config.sh@347 -- # count=100 00:08:54.759 22:49:30 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:08:54.759 22:49:30 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:54.759 22:49:30 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:54.759 22:49:30 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:55.696 22:49:31 json_config -- json_config/json_config.sh@349 -- # break 00:08:55.696 22:49:31 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:08:55.696 22:49:31 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:08:55.696 22:49:31 json_config -- json_config/common.sh@31 -- # local app=target 00:08:55.696 22:49:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:55.696 22:49:31 json_config -- json_config/common.sh@35 -- # [[ -n 745524 ]] 00:08:55.696 22:49:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 745524 00:08:55.696 22:49:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:55.696 22:49:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:55.696 22:49:31 json_config -- json_config/common.sh@41 -- # kill -0 745524 00:08:55.696 22:49:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:55.956 22:49:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:55.956 22:49:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:55.956 22:49:32 
json_config -- json_config/common.sh@41 -- # kill -0 745524 00:08:55.956 22:49:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:55.956 22:49:32 json_config -- json_config/common.sh@43 -- # break 00:08:55.956 22:49:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:55.956 22:49:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:55.956 SPDK target shutdown done 00:08:55.956 22:49:32 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:08:55.956 INFO: relaunching applications... 00:08:55.956 22:49:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:55.956 22:49:32 json_config -- json_config/common.sh@9 -- # local app=target 00:08:55.956 22:49:32 json_config -- json_config/common.sh@10 -- # shift 00:08:55.956 22:49:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:55.956 22:49:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:55.956 22:49:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:55.956 22:49:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:55.956 22:49:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:55.956 22:49:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=747113 00:08:55.956 22:49:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:55.956 22:49:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:55.956 Waiting for target to run... 00:08:55.956 22:49:32 json_config -- json_config/common.sh@25 -- # waitforlisten 747113 /var/tmp/spdk_tgt.sock 00:08:55.956 22:49:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 747113 ']' 00:08:55.956 22:49:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:55.956 22:49:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.956 22:49:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:55.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:55.956 22:49:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.956 22:49:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 [2024-07-22 22:49:32.308177] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:08:56.216 [2024-07-22 22:49:32.308395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747113 ] 00:08:56.216 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.157 [2024-07-22 22:49:33.102416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.157 [2024-07-22 22:49:33.239531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.451 [2024-07-22 22:49:36.345159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.451 [2024-07-22 22:49:36.377895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:00.451 22:49:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.451 22:49:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:00.451 22:49:36 json_config -- json_config/common.sh@26 -- # echo '' 00:09:00.451 00:09:00.451 22:49:36 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:09:00.451 22:49:36 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:00.451 INFO: Checking if target configuration is the same... 00:09:00.451 22:49:36 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:00.451 22:49:36 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:09:00.451 22:49:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:00.451 + '[' 2 -ne 2 ']' 00:09:00.451 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:00.451 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:00.451 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:00.451 +++ basename /dev/fd/62 00:09:00.451 ++ mktemp /tmp/62.XXX 00:09:00.451 + tmp_file_1=/tmp/62.gnf 00:09:00.451 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:00.451 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:00.451 + tmp_file_2=/tmp/spdk_tgt_config.json.CSL 00:09:00.451 + ret=0 00:09:00.451 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:01.019 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:01.019 + diff -u /tmp/62.gnf /tmp/spdk_tgt_config.json.CSL 00:09:01.019 + echo 'INFO: JSON config files are the same' 00:09:01.019 INFO: JSON config files are the same 00:09:01.019 + rm /tmp/62.gnf /tmp/spdk_tgt_config.json.CSL 00:09:01.019 + exit 0 00:09:01.019 22:49:37 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:09:01.019 22:49:37 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:01.019 INFO: changing configuration and checking if this can be detected... 
00:09:01.019 22:49:37 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:01.020 22:49:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:01.603 22:49:37 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:01.603 22:49:37 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:09:01.603 22:49:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:01.603 + '[' 2 -ne 2 ']' 00:09:01.603 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:01.603 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:01.603 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.603 +++ basename /dev/fd/62 00:09:01.603 ++ mktemp /tmp/62.XXX 00:09:01.603 + tmp_file_1=/tmp/62.BPC 00:09:01.603 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:01.603 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:01.603 + tmp_file_2=/tmp/spdk_tgt_config.json.UvL 00:09:01.603 + ret=0 00:09:01.603 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:02.222 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:02.222 + diff -u /tmp/62.BPC /tmp/spdk_tgt_config.json.UvL 00:09:02.222 + ret=1 00:09:02.222 + echo '=== Start of file: /tmp/62.BPC ===' 00:09:02.222 + cat /tmp/62.BPC 00:09:02.222 + echo '=== End of file: /tmp/62.BPC ===' 00:09:02.222 + echo '' 00:09:02.222 + echo '=== Start of file: /tmp/spdk_tgt_config.json.UvL ===' 00:09:02.222 + cat /tmp/spdk_tgt_config.json.UvL 00:09:02.222 + echo '=== End of file: /tmp/spdk_tgt_config.json.UvL ===' 00:09:02.222 + echo '' 00:09:02.222 + rm /tmp/62.BPC /tmp/spdk_tgt_config.json.UvL 00:09:02.222 + exit 1 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:09:02.222 INFO: configuration change detected. 
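Note: the two comparison passes above are how json_config.sh decides whether the relaunched target still matches spdk_tgt_config.json: json_diff.sh dumps the live configuration with save_config, normalizes both documents with config_filter.py -method sort into temp files, and runs diff -u on the result (exit 0 = identical, exit 1 = change detected). The xtrace does not show redirections, so the stdin/stdout plumbing in this condensed bash sketch is assumed, and the wrapper name is chosen here for illustration:

    # Compare the running spdk_tgt configuration against a reference JSON file.
    # Condensed from the json_diff.sh trace above; workspace paths abbreviated.
    compare_live_config() {
        local reference=$1
        local live sorted_ref
        live=$(mktemp /tmp/62.XXX)
        sorted_ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)

        # Dump whatever the target is running right now and normalize it.
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | test/json_config/config_filter.py -method sort > "$live"

        # Normalize the reference file the same way, then compare.
        test/json_config/config_filter.py -method sort < "$reference" > "$sorted_ref"

        if diff -u "$live" "$sorted_ref"; then
            echo 'INFO: JSON config files are the same'
        else
            echo 'INFO: configuration change detected.'
            return 1
        fi
    }

In the trace, the first pass exits 0 (nothing drifted across the relaunch) and the second exits 1, because MallocBdevForConfigChangeCheck was deleted via bdev_malloc_delete between the two dumps.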
00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:09:02.222 22:49:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.222 22:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@321 -- # [[ -n 747113 ]] 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:09:02.222 22:49:38 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:09:02.222 22:49:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.223 22:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:02.223 22:49:38 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:09:02.223 22:49:38 json_config -- json_config/json_config.sh@197 -- # uname -s 00:09:02.223 22:49:38 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:09:02.223 22:49:38 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:09:02.223 22:49:38 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:09:02.223 22:49:38 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:09:02.223 22:49:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.223 22:49:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:02.482 22:49:38 json_config -- json_config/json_config.sh@327 -- # killprocess 747113 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@948 -- # '[' -z 747113 ']' 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@952 -- # kill -0 747113 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@953 -- # uname 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 747113 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.482 22:49:38 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.483 22:49:38 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 747113' 00:09:02.483 killing process with pid 747113 00:09:02.483 22:49:38 json_config -- common/autotest_common.sh@967 -- # kill 747113 00:09:02.483 22:49:38 json_config -- common/autotest_common.sh@972 -- # wait 747113 00:09:04.386 22:49:40 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:04.386 22:49:40 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:09:04.386 22:49:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.386 22:49:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 22:49:40 json_config -- json_config/json_config.sh@332 -- # return 0 00:09:04.386 22:49:40 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:09:04.386 INFO: Success 00:09:04.386 00:09:04.386 real 0m22.032s 00:09:04.386 user 
0m28.659s 00:09:04.386 sys 0m3.812s 00:09:04.386 22:49:40 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.386 22:49:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 ************************************ 00:09:04.386 END TEST json_config 00:09:04.386 ************************************ 00:09:04.386 22:49:40 -- common/autotest_common.sh@1142 -- # return 0 00:09:04.386 22:49:40 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:04.386 22:49:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.386 22:49:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.386 22:49:40 -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 ************************************ 00:09:04.386 START TEST json_config_extra_key 00:09:04.386 ************************************ 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.386 22:49:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.386 22:49:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.386 22:49:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.386 22:49:40 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.386 22:49:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.386 22:49:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.386 22:49:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:04.386 22:49:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.386 22:49:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:04.386 22:49:40 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:04.386 INFO: launching applications... 00:09:04.386 22:49:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=748157 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:04.386 Waiting for target to run... 00:09:04.386 22:49:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 748157 /var/tmp/spdk_tgt.sock 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 748157 ']' 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:04.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.386 22:49:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 [2024-07-22 22:49:40.571596] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
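Note: json_config_extra_key exercises the second of the two launch styles seen in this section. json_config started the target idle with --wait-for-rpc and then built its configuration over the RPC socket (load_config fed from gen_nvme.sh, bdev_malloc_create, the nvmf_* calls); here the prepared extra_key.json is handed to spdk_tgt directly at startup with --json. A condensed sketch of the two invocations as they appear in the trace (paths abbreviated; the pipe into load_config is inferred from the consecutive xtrace entries):

    # 1) Start idle, then configure the running target over its RPC socket.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config

    # 2) Apply a prepared JSON configuration in one shot at startup.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &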
00:09:04.387 [2024-07-22 22:49:40.571711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748157 ] 00:09:04.387 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.956 [2024-07-22 22:49:41.117728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.956 [2024-07-22 22:49:41.226141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.895 22:49:42 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.895 22:49:42 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:05.895 00:09:05.895 22:49:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:05.895 INFO: shutting down applications... 00:09:05.895 22:49:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 748157 ]] 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 748157 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 748157 00:09:05.895 22:49:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:06.463 22:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:06.463 22:49:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:06.463 22:49:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 748157 00:09:06.463 22:49:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 748157 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:07.033 22:49:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:07.033 SPDK target shutdown done 00:09:07.033 22:49:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:07.033 Success 00:09:07.033 00:09:07.033 real 0m2.779s 00:09:07.033 user 0m2.785s 00:09:07.033 sys 0m0.727s 00:09:07.033 22:49:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.033 22:49:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:07.033 ************************************ 00:09:07.033 END TEST json_config_extra_key 00:09:07.033 ************************************ 00:09:07.033 22:49:43 -- common/autotest_common.sh@1142 -- 
# return 0 00:09:07.033 22:49:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.033 22:49:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:07.033 22:49:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.033 22:49:43 -- common/autotest_common.sh@10 -- # set +x 00:09:07.033 ************************************ 00:09:07.033 START TEST alias_rpc 00:09:07.033 ************************************ 00:09:07.033 22:49:43 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.294 * Looking for test storage... 00:09:07.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:07.294 22:49:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:07.294 22:49:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=748484 00:09:07.294 22:49:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:07.294 22:49:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 748484 00:09:07.294 22:49:43 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 748484 ']' 00:09:07.294 22:49:43 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.294 22:49:43 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.294 22:49:43 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.294 22:49:43 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.294 22:49:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.294 [2024-07-22 22:49:43.462694] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
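Note: json_config and json_config_extra_key both stop their target the same way, as traced above: json_config_test_shutdown_app (test/json_config/common.sh) sends SIGINT and then polls the PID with kill -0 for up to 30 half-second intervals before printing 'SPDK target shutdown done'. A minimal bash sketch of that loop, condensed from the trace (timeout error handling omitted, function name chosen here):

    # Ask spdk_tgt to exit, then wait up to ~15 s for the PID to disappear.
    shutdown_app_sketch() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 succeeds only while the process still exists.
            kill -0 "$pid" 2>/dev/null || break
            sleep 0.5
        done
        echo 'SPDK target shutdown done'
    }

In the extra_key run above, the target needed two polling rounds (two 'sleep 0.5' entries) before kill -0 failed and the loop broke out.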
00:09:07.294 [2024-07-22 22:49:43.462871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748484 ] 00:09:07.294 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.294 [2024-07-22 22:49:43.598448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.554 [2024-07-22 22:49:43.755813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.125 22:49:44 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.125 22:49:44 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:08.125 22:49:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:08.693 22:49:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 748484 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 748484 ']' 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 748484 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748484 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748484' 00:09:08.693 killing process with pid 748484 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@967 -- # kill 748484 00:09:08.693 22:49:44 alias_rpc -- common/autotest_common.sh@972 -- # wait 748484 00:09:09.261 00:09:09.261 real 0m2.129s 00:09:09.261 user 0m2.550s 00:09:09.261 sys 0m0.785s 00:09:09.261 22:49:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.261 22:49:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.261 ************************************ 00:09:09.261 END TEST alias_rpc 00:09:09.261 ************************************ 00:09:09.261 22:49:45 -- common/autotest_common.sh@1142 -- # return 0 00:09:09.261 22:49:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:09.261 22:49:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:09.261 22:49:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:09.261 22:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.261 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:09:09.261 ************************************ 00:09:09.261 START TEST spdkcli_tcp 00:09:09.261 ************************************ 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:09.261 * Looking for test storage... 
00:09:09.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=748798 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:09.261 22:49:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 748798 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 748798 ']' 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.261 22:49:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.521 [2024-07-22 22:49:45.662818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:09:09.521 [2024-07-22 22:49:45.662996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748798 ] 00:09:09.521 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.521 [2024-07-22 22:49:45.801304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.780 [2024-07-22 22:49:45.956486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.780 [2024-07-22 22:49:45.956493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.038 22:49:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.038 22:49:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:09:10.038 22:49:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=748933 00:09:10.038 22:49:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:10.038 22:49:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:10.297 [ 00:09:10.297 "bdev_malloc_delete", 00:09:10.297 "bdev_malloc_create", 00:09:10.297 "bdev_null_resize", 00:09:10.297 "bdev_null_delete", 00:09:10.297 "bdev_null_create", 00:09:10.297 "bdev_nvme_cuse_unregister", 00:09:10.297 "bdev_nvme_cuse_register", 00:09:10.297 "bdev_opal_new_user", 00:09:10.297 "bdev_opal_set_lock_state", 00:09:10.297 "bdev_opal_delete", 00:09:10.297 "bdev_opal_get_info", 00:09:10.297 "bdev_opal_create", 00:09:10.297 "bdev_nvme_opal_revert", 00:09:10.297 "bdev_nvme_opal_init", 00:09:10.297 "bdev_nvme_send_cmd", 00:09:10.297 "bdev_nvme_get_path_iostat", 00:09:10.297 "bdev_nvme_get_mdns_discovery_info", 00:09:10.297 "bdev_nvme_stop_mdns_discovery", 00:09:10.297 "bdev_nvme_start_mdns_discovery", 00:09:10.297 "bdev_nvme_set_multipath_policy", 00:09:10.297 "bdev_nvme_set_preferred_path", 00:09:10.297 "bdev_nvme_get_io_paths", 00:09:10.297 "bdev_nvme_remove_error_injection", 00:09:10.297 "bdev_nvme_add_error_injection", 00:09:10.297 "bdev_nvme_get_discovery_info", 00:09:10.298 "bdev_nvme_stop_discovery", 00:09:10.298 "bdev_nvme_start_discovery", 00:09:10.298 "bdev_nvme_get_controller_health_info", 00:09:10.298 "bdev_nvme_disable_controller", 00:09:10.298 "bdev_nvme_enable_controller", 00:09:10.298 "bdev_nvme_reset_controller", 00:09:10.298 "bdev_nvme_get_transport_statistics", 00:09:10.298 "bdev_nvme_apply_firmware", 00:09:10.298 "bdev_nvme_detach_controller", 00:09:10.298 "bdev_nvme_get_controllers", 00:09:10.298 "bdev_nvme_attach_controller", 00:09:10.298 "bdev_nvme_set_hotplug", 00:09:10.298 "bdev_nvme_set_options", 00:09:10.298 "bdev_passthru_delete", 00:09:10.298 "bdev_passthru_create", 00:09:10.298 "bdev_lvol_set_parent_bdev", 00:09:10.298 "bdev_lvol_set_parent", 00:09:10.298 "bdev_lvol_check_shallow_copy", 00:09:10.298 "bdev_lvol_start_shallow_copy", 00:09:10.298 "bdev_lvol_grow_lvstore", 00:09:10.298 "bdev_lvol_get_lvols", 00:09:10.298 "bdev_lvol_get_lvstores", 00:09:10.298 "bdev_lvol_delete", 00:09:10.298 "bdev_lvol_set_read_only", 00:09:10.298 "bdev_lvol_resize", 00:09:10.298 "bdev_lvol_decouple_parent", 00:09:10.298 "bdev_lvol_inflate", 00:09:10.298 "bdev_lvol_rename", 00:09:10.298 "bdev_lvol_clone_bdev", 00:09:10.298 "bdev_lvol_clone", 00:09:10.298 "bdev_lvol_snapshot", 00:09:10.298 "bdev_lvol_create", 00:09:10.298 "bdev_lvol_delete_lvstore", 00:09:10.298 
"bdev_lvol_rename_lvstore", 00:09:10.298 "bdev_lvol_create_lvstore", 00:09:10.298 "bdev_raid_set_options", 00:09:10.298 "bdev_raid_remove_base_bdev", 00:09:10.298 "bdev_raid_add_base_bdev", 00:09:10.298 "bdev_raid_delete", 00:09:10.298 "bdev_raid_create", 00:09:10.298 "bdev_raid_get_bdevs", 00:09:10.298 "bdev_error_inject_error", 00:09:10.298 "bdev_error_delete", 00:09:10.298 "bdev_error_create", 00:09:10.298 "bdev_split_delete", 00:09:10.298 "bdev_split_create", 00:09:10.298 "bdev_delay_delete", 00:09:10.298 "bdev_delay_create", 00:09:10.298 "bdev_delay_update_latency", 00:09:10.298 "bdev_zone_block_delete", 00:09:10.298 "bdev_zone_block_create", 00:09:10.298 "blobfs_create", 00:09:10.298 "blobfs_detect", 00:09:10.298 "blobfs_set_cache_size", 00:09:10.298 "bdev_aio_delete", 00:09:10.298 "bdev_aio_rescan", 00:09:10.298 "bdev_aio_create", 00:09:10.298 "bdev_ftl_set_property", 00:09:10.298 "bdev_ftl_get_properties", 00:09:10.298 "bdev_ftl_get_stats", 00:09:10.298 "bdev_ftl_unmap", 00:09:10.298 "bdev_ftl_unload", 00:09:10.298 "bdev_ftl_delete", 00:09:10.298 "bdev_ftl_load", 00:09:10.298 "bdev_ftl_create", 00:09:10.298 "bdev_virtio_attach_controller", 00:09:10.298 "bdev_virtio_scsi_get_devices", 00:09:10.298 "bdev_virtio_detach_controller", 00:09:10.298 "bdev_virtio_blk_set_hotplug", 00:09:10.298 "bdev_iscsi_delete", 00:09:10.298 "bdev_iscsi_create", 00:09:10.298 "bdev_iscsi_set_options", 00:09:10.298 "accel_error_inject_error", 00:09:10.298 "ioat_scan_accel_module", 00:09:10.298 "dsa_scan_accel_module", 00:09:10.298 "iaa_scan_accel_module", 00:09:10.298 "vfu_virtio_create_scsi_endpoint", 00:09:10.298 "vfu_virtio_scsi_remove_target", 00:09:10.298 "vfu_virtio_scsi_add_target", 00:09:10.298 "vfu_virtio_create_blk_endpoint", 00:09:10.298 "vfu_virtio_delete_endpoint", 00:09:10.298 "keyring_file_remove_key", 00:09:10.298 "keyring_file_add_key", 00:09:10.298 "keyring_linux_set_options", 00:09:10.298 "iscsi_get_histogram", 00:09:10.298 "iscsi_enable_histogram", 00:09:10.298 "iscsi_set_options", 00:09:10.298 "iscsi_get_auth_groups", 00:09:10.298 "iscsi_auth_group_remove_secret", 00:09:10.298 "iscsi_auth_group_add_secret", 00:09:10.298 "iscsi_delete_auth_group", 00:09:10.298 "iscsi_create_auth_group", 00:09:10.298 "iscsi_set_discovery_auth", 00:09:10.298 "iscsi_get_options", 00:09:10.298 "iscsi_target_node_request_logout", 00:09:10.298 "iscsi_target_node_set_redirect", 00:09:10.298 "iscsi_target_node_set_auth", 00:09:10.298 "iscsi_target_node_add_lun", 00:09:10.298 "iscsi_get_stats", 00:09:10.298 "iscsi_get_connections", 00:09:10.298 "iscsi_portal_group_set_auth", 00:09:10.298 "iscsi_start_portal_group", 00:09:10.298 "iscsi_delete_portal_group", 00:09:10.298 "iscsi_create_portal_group", 00:09:10.298 "iscsi_get_portal_groups", 00:09:10.298 "iscsi_delete_target_node", 00:09:10.298 "iscsi_target_node_remove_pg_ig_maps", 00:09:10.298 "iscsi_target_node_add_pg_ig_maps", 00:09:10.298 "iscsi_create_target_node", 00:09:10.298 "iscsi_get_target_nodes", 00:09:10.298 "iscsi_delete_initiator_group", 00:09:10.298 "iscsi_initiator_group_remove_initiators", 00:09:10.298 "iscsi_initiator_group_add_initiators", 00:09:10.298 "iscsi_create_initiator_group", 00:09:10.298 "iscsi_get_initiator_groups", 00:09:10.298 "nvmf_set_crdt", 00:09:10.298 "nvmf_set_config", 00:09:10.298 "nvmf_set_max_subsystems", 00:09:10.298 "nvmf_stop_mdns_prr", 00:09:10.298 "nvmf_publish_mdns_prr", 00:09:10.298 "nvmf_subsystem_get_listeners", 00:09:10.298 "nvmf_subsystem_get_qpairs", 00:09:10.298 "nvmf_subsystem_get_controllers", 00:09:10.298 
"nvmf_get_stats", 00:09:10.298 "nvmf_get_transports", 00:09:10.298 "nvmf_create_transport", 00:09:10.298 "nvmf_get_targets", 00:09:10.298 "nvmf_delete_target", 00:09:10.298 "nvmf_create_target", 00:09:10.298 "nvmf_subsystem_allow_any_host", 00:09:10.298 "nvmf_subsystem_remove_host", 00:09:10.298 "nvmf_subsystem_add_host", 00:09:10.298 "nvmf_ns_remove_host", 00:09:10.298 "nvmf_ns_add_host", 00:09:10.298 "nvmf_subsystem_remove_ns", 00:09:10.298 "nvmf_subsystem_add_ns", 00:09:10.298 "nvmf_subsystem_listener_set_ana_state", 00:09:10.298 "nvmf_discovery_get_referrals", 00:09:10.298 "nvmf_discovery_remove_referral", 00:09:10.298 "nvmf_discovery_add_referral", 00:09:10.298 "nvmf_subsystem_remove_listener", 00:09:10.298 "nvmf_subsystem_add_listener", 00:09:10.298 "nvmf_delete_subsystem", 00:09:10.298 "nvmf_create_subsystem", 00:09:10.298 "nvmf_get_subsystems", 00:09:10.298 "env_dpdk_get_mem_stats", 00:09:10.298 "nbd_get_disks", 00:09:10.298 "nbd_stop_disk", 00:09:10.298 "nbd_start_disk", 00:09:10.298 "ublk_recover_disk", 00:09:10.298 "ublk_get_disks", 00:09:10.298 "ublk_stop_disk", 00:09:10.298 "ublk_start_disk", 00:09:10.298 "ublk_destroy_target", 00:09:10.298 "ublk_create_target", 00:09:10.298 "virtio_blk_create_transport", 00:09:10.298 "virtio_blk_get_transports", 00:09:10.298 "vhost_controller_set_coalescing", 00:09:10.298 "vhost_get_controllers", 00:09:10.298 "vhost_delete_controller", 00:09:10.298 "vhost_create_blk_controller", 00:09:10.298 "vhost_scsi_controller_remove_target", 00:09:10.298 "vhost_scsi_controller_add_target", 00:09:10.298 "vhost_start_scsi_controller", 00:09:10.298 "vhost_create_scsi_controller", 00:09:10.298 "thread_set_cpumask", 00:09:10.298 "framework_get_governor", 00:09:10.298 "framework_get_scheduler", 00:09:10.298 "framework_set_scheduler", 00:09:10.298 "framework_get_reactors", 00:09:10.298 "thread_get_io_channels", 00:09:10.298 "thread_get_pollers", 00:09:10.298 "thread_get_stats", 00:09:10.298 "framework_monitor_context_switch", 00:09:10.298 "spdk_kill_instance", 00:09:10.298 "log_enable_timestamps", 00:09:10.298 "log_get_flags", 00:09:10.298 "log_clear_flag", 00:09:10.298 "log_set_flag", 00:09:10.298 "log_get_level", 00:09:10.298 "log_set_level", 00:09:10.298 "log_get_print_level", 00:09:10.298 "log_set_print_level", 00:09:10.298 "framework_enable_cpumask_locks", 00:09:10.298 "framework_disable_cpumask_locks", 00:09:10.298 "framework_wait_init", 00:09:10.298 "framework_start_init", 00:09:10.298 "scsi_get_devices", 00:09:10.299 "bdev_get_histogram", 00:09:10.299 "bdev_enable_histogram", 00:09:10.299 "bdev_set_qos_limit", 00:09:10.299 "bdev_set_qd_sampling_period", 00:09:10.299 "bdev_get_bdevs", 00:09:10.299 "bdev_reset_iostat", 00:09:10.299 "bdev_get_iostat", 00:09:10.299 "bdev_examine", 00:09:10.299 "bdev_wait_for_examine", 00:09:10.299 "bdev_set_options", 00:09:10.299 "notify_get_notifications", 00:09:10.299 "notify_get_types", 00:09:10.299 "accel_get_stats", 00:09:10.299 "accel_set_options", 00:09:10.299 "accel_set_driver", 00:09:10.299 "accel_crypto_key_destroy", 00:09:10.299 "accel_crypto_keys_get", 00:09:10.299 "accel_crypto_key_create", 00:09:10.299 "accel_assign_opc", 00:09:10.299 "accel_get_module_info", 00:09:10.299 "accel_get_opc_assignments", 00:09:10.299 "vmd_rescan", 00:09:10.299 "vmd_remove_device", 00:09:10.299 "vmd_enable", 00:09:10.299 "sock_get_default_impl", 00:09:10.299 "sock_set_default_impl", 00:09:10.299 "sock_impl_set_options", 00:09:10.299 "sock_impl_get_options", 00:09:10.299 "iobuf_get_stats", 00:09:10.299 "iobuf_set_options", 
00:09:10.299 "keyring_get_keys", 00:09:10.299 "framework_get_pci_devices", 00:09:10.299 "framework_get_config", 00:09:10.299 "framework_get_subsystems", 00:09:10.299 "vfu_tgt_set_base_path", 00:09:10.299 "trace_get_info", 00:09:10.299 "trace_get_tpoint_group_mask", 00:09:10.299 "trace_disable_tpoint_group", 00:09:10.299 "trace_enable_tpoint_group", 00:09:10.299 "trace_clear_tpoint_mask", 00:09:10.299 "trace_set_tpoint_mask", 00:09:10.299 "spdk_get_version", 00:09:10.299 "rpc_get_methods" 00:09:10.299 ] 00:09:10.299 22:49:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.299 22:49:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:10.299 22:49:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 748798 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 748798 ']' 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 748798 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:10.299 22:49:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748798 00:09:10.558 22:49:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:10.558 22:49:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:10.558 22:49:46 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748798' 00:09:10.558 killing process with pid 748798 00:09:10.558 22:49:46 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 748798 00:09:10.558 22:49:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 748798 00:09:11.126 00:09:11.126 real 0m1.706s 00:09:11.126 user 0m2.947s 00:09:11.126 sys 0m0.695s 00:09:11.126 22:49:47 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.126 22:49:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.127 ************************************ 00:09:11.127 END TEST spdkcli_tcp 00:09:11.127 ************************************ 00:09:11.127 22:49:47 -- common/autotest_common.sh@1142 -- # return 0 00:09:11.127 22:49:47 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:11.127 22:49:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:11.127 22:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.127 22:49:47 -- common/autotest_common.sh@10 -- # set +x 00:09:11.127 ************************************ 00:09:11.127 START TEST dpdk_mem_utility 00:09:11.127 ************************************ 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:11.127 * Looking for test storage... 
00:09:11.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:11.127 22:49:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:11.127 22:49:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=749050 00:09:11.127 22:49:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:11.127 22:49:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 749050 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 749050 ']' 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.127 22:49:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.127 [2024-07-22 22:49:47.409696] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:11.127 [2024-07-22 22:49:47.409823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749050 ] 00:09:11.387 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.387 [2024-07-22 22:49:47.513838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.387 [2024-07-22 22:49:47.663436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.958 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.958 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:09:11.958 22:49:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:11.958 22:49:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:11.959 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.959 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:11.959 { 00:09:11.959 "filename": "/tmp/spdk_mem_dump.txt" 00:09:11.959 } 00:09:11.959 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.959 22:49:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:11.959 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:11.959 1 heaps totaling size 814.000000 MiB 00:09:11.959 size: 814.000000 MiB heap id: 0 00:09:11.959 end heaps---------- 00:09:11.959 8 mempools totaling size 598.116089 MiB 00:09:11.959 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:11.959 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:11.959 size: 84.521057 MiB name: bdev_io_749050 00:09:11.959 size: 51.011292 MiB name: evtpool_749050 00:09:11.959 size: 
50.003479 MiB name: msgpool_749050 00:09:11.959 size: 21.763794 MiB name: PDU_Pool 00:09:11.959 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:11.959 size: 0.026123 MiB name: Session_Pool 00:09:11.959 end mempools------- 00:09:11.959 6 memzones totaling size 4.142822 MiB 00:09:11.959 size: 1.000366 MiB name: RG_ring_0_749050 00:09:11.959 size: 1.000366 MiB name: RG_ring_1_749050 00:09:11.959 size: 1.000366 MiB name: RG_ring_4_749050 00:09:11.959 size: 1.000366 MiB name: RG_ring_5_749050 00:09:11.959 size: 0.125366 MiB name: RG_ring_2_749050 00:09:11.959 size: 0.015991 MiB name: RG_ring_3_749050 00:09:11.959 end memzones------- 00:09:11.959 22:49:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:12.219 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:09:12.219 list of free elements. size: 12.519348 MiB 00:09:12.219 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:12.219 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:12.219 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:12.219 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:12.219 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:12.219 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:12.219 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:12.219 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:12.219 element at address: 0x200000200000 with size: 0.841614 MiB 00:09:12.219 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:09:12.219 element at address: 0x20000b200000 with size: 0.490723 MiB 00:09:12.219 element at address: 0x200000800000 with size: 0.487793 MiB 00:09:12.219 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:12.219 element at address: 0x200027e00000 with size: 0.410034 MiB 00:09:12.219 element at address: 0x200003a00000 with size: 0.355530 MiB 00:09:12.219 list of standard malloc elements. 
size: 199.218079 MiB 00:09:12.219 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:12.219 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:12.219 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:12.219 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:12.219 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:12.219 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:12.219 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:12.219 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:12.219 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:12.219 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:12.219 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:12.219 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200027e69040 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:12.219 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:12.219 list of memzone associated elements. 
size: 602.262573 MiB 00:09:12.219 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:12.219 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:12.219 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:12.219 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:12.219 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:12.219 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_749050_0 00:09:12.219 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:12.219 associated memzone info: size: 48.002930 MiB name: MP_evtpool_749050_0 00:09:12.219 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:12.219 associated memzone info: size: 48.002930 MiB name: MP_msgpool_749050_0 00:09:12.219 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:12.219 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:12.219 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:12.219 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:12.219 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:12.219 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_749050 00:09:12.219 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:12.219 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_749050 00:09:12.219 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:12.219 associated memzone info: size: 1.007996 MiB name: MP_evtpool_749050 00:09:12.219 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:12.219 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:12.219 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:12.219 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:12.219 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:12.219 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:12.219 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:12.219 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:12.219 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:12.219 associated memzone info: size: 1.000366 MiB name: RG_ring_0_749050 00:09:12.219 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:12.219 associated memzone info: size: 1.000366 MiB name: RG_ring_1_749050 00:09:12.220 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:12.220 associated memzone info: size: 1.000366 MiB name: RG_ring_4_749050 00:09:12.220 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:12.220 associated memzone info: size: 1.000366 MiB name: RG_ring_5_749050 00:09:12.220 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:12.220 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_749050 00:09:12.220 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:12.220 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:12.220 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:12.220 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:12.220 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:12.220 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:12.220 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:12.220 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_749050 00:09:12.220 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:12.220 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:12.220 element at address: 0x200027e69100 with size: 0.023743 MiB 00:09:12.220 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:12.220 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:12.220 associated memzone info: size: 0.015991 MiB name: RG_ring_3_749050 00:09:12.220 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:09:12.220 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:12.220 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:09:12.220 associated memzone info: size: 0.000183 MiB name: MP_msgpool_749050 00:09:12.220 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:12.220 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_749050 00:09:12.220 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:09:12.220 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:12.220 22:49:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:12.220 22:49:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 749050 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 749050 ']' 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 749050 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749050 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749050' 00:09:12.220 killing process with pid 749050 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 749050 00:09:12.220 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 749050 00:09:12.791 00:09:12.791 real 0m1.684s 00:09:12.791 user 0m1.798s 00:09:12.791 sys 0m0.692s 00:09:12.791 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.791 22:49:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:12.791 ************************************ 00:09:12.791 END TEST dpdk_mem_utility 00:09:12.791 ************************************ 00:09:12.791 22:49:48 -- common/autotest_common.sh@1142 -- # return 0 00:09:12.791 22:49:48 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:12.791 22:49:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.791 22:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.791 22:49:48 -- common/autotest_common.sh@10 -- # set +x 00:09:12.791 ************************************ 00:09:12.791 START TEST event 00:09:12.791 ************************************ 00:09:12.791 22:49:49 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:13.052 * Looking for test storage... 
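The dpdk_mem_utility pass reduces to the two calls shown above: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump (heap totals, mempools and memzones, or a per-heap element breakdown with -m). A hedged sketch of the same flow, assuming a target is already up on the default RPC socket and paths relative to an SPDK checkout:

# ask the running target to dump its DPDK memory state
# (the RPC replies with the dump location, /tmp/spdk_mem_dump.txt in this run)
./scripts/rpc.py env_dpdk_get_mem_stats

# summarize the dump: total heap size, mempools, memzones
./scripts/dpdk_mem_info.py

# restrict the report to heap id 0 to see the free/malloc element lists
./scripts/dpdk_mem_info.py -m 0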
00:09:13.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:13.052 22:49:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:13.052 22:49:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:13.052 22:49:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:13.052 22:49:49 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:13.052 22:49:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.052 22:49:49 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.052 ************************************ 00:09:13.052 START TEST event_perf 00:09:13.052 ************************************ 00:09:13.052 22:49:49 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:13.052 Running I/O for 1 seconds...[2024-07-22 22:49:49.188227] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:13.052 [2024-07-22 22:49:49.188388] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749325 ] 00:09:13.052 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.052 [2024-07-22 22:49:49.318471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.312 [2024-07-22 22:49:49.476515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.312 [2024-07-22 22:49:49.476560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.312 [2024-07-22 22:49:49.476615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.312 [2024-07-22 22:49:49.476619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.688 Running I/O for 1 seconds... 00:09:14.688 lcore 0: 161901 00:09:14.688 lcore 1: 161901 00:09:14.688 lcore 2: 161900 00:09:14.688 lcore 3: 161901 00:09:14.688 done. 00:09:14.688 00:09:14.688 real 0m1.432s 00:09:14.688 user 0m4.249s 00:09:14.688 sys 0m0.172s 00:09:14.688 22:49:50 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.688 22:49:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 ************************************ 00:09:14.688 END TEST event_perf 00:09:14.688 ************************************ 00:09:14.688 22:49:50 event -- common/autotest_common.sh@1142 -- # return 0 00:09:14.688 22:49:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:14.688 22:49:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:14.688 22:49:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.688 22:49:50 event -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 ************************************ 00:09:14.688 START TEST event_reactor 00:09:14.688 ************************************ 00:09:14.688 22:49:50 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:14.688 [2024-07-22 22:49:50.707830] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
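The event_perf invocation above uses -m 0xF -t 1: a hexadecimal core mask selecting lcores 0 through 3 and a one-second run, which is why exactly four per-lcore event counters are printed. For readers unused to core masks, a tiny throwaway loop (the 0xF value is simply the one used in this run, and the helper is illustrative, not part of the test) shows which lcores a mask selects:

# decode a DPDK-style hex core mask into lcore numbers
mask=0xF
for i in $(seq 0 31); do
  if (( (mask >> i) & 1 )); then
    echo "lcore $i selected"
  fi
done
# prints lcore 0 through lcore 3 for 0xF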
00:09:14.688 [2024-07-22 22:49:50.707973] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749483 ] 00:09:14.688 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.688 [2024-07-22 22:49:50.841533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.688 [2024-07-22 22:49:50.995238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.069 test_start 00:09:16.069 oneshot 00:09:16.069 tick 100 00:09:16.069 tick 100 00:09:16.069 tick 250 00:09:16.069 tick 100 00:09:16.069 tick 100 00:09:16.069 tick 100 00:09:16.069 tick 250 00:09:16.069 tick 500 00:09:16.069 tick 100 00:09:16.069 tick 100 00:09:16.069 tick 250 00:09:16.069 tick 100 00:09:16.069 tick 100 00:09:16.069 test_end 00:09:16.069 00:09:16.069 real 0m1.427s 00:09:16.069 user 0m1.255s 00:09:16.069 sys 0m0.162s 00:09:16.069 22:49:52 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.069 22:49:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:16.069 ************************************ 00:09:16.069 END TEST event_reactor 00:09:16.069 ************************************ 00:09:16.069 22:49:52 event -- common/autotest_common.sh@1142 -- # return 0 00:09:16.069 22:49:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:16.069 22:49:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:16.069 22:49:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.069 22:49:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:16.069 ************************************ 00:09:16.069 START TEST event_reactor_perf 00:09:16.069 ************************************ 00:09:16.069 22:49:52 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:16.070 [2024-07-22 22:49:52.208243] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:09:16.070 [2024-07-22 22:49:52.208421] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749760 ] 00:09:16.070 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.070 [2024-07-22 22:49:52.345607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.330 [2024-07-22 22:49:52.499282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.710 test_start 00:09:17.710 test_end 00:09:17.710 Performance: 170840 events per second 00:09:17.710 00:09:17.710 real 0m1.425s 00:09:17.710 user 0m1.259s 00:09:17.710 sys 0m0.155s 00:09:17.710 22:49:53 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.710 22:49:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:17.710 ************************************ 00:09:17.710 END TEST event_reactor_perf 00:09:17.710 ************************************ 00:09:17.710 22:49:53 event -- common/autotest_common.sh@1142 -- # return 0 00:09:17.710 22:49:53 event -- event/event.sh@49 -- # uname -s 00:09:17.710 22:49:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:17.710 22:49:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:17.710 22:49:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:17.710 22:49:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.710 22:49:53 event -- common/autotest_common.sh@10 -- # set +x 00:09:17.710 ************************************ 00:09:17.710 START TEST event_scheduler 00:09:17.710 ************************************ 00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:17.710 * Looking for test storage... 00:09:17.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:17.710 22:49:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:17.710 22:49:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=749946 00:09:17.710 22:49:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:17.710 22:49:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:17.710 22:49:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 749946 00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 749946 ']' 00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.710 22:49:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:17.710 [2024-07-22 22:49:53.858042] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:17.710 [2024-07-22 22:49:53.858211] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749946 ] 00:09:17.710 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.710 [2024-07-22 22:49:53.986446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.969 [2024-07-22 22:49:54.144711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.969 [2024-07-22 22:49:54.144745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.969 [2024-07-22 22:49:54.144805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.969 [2024-07-22 22:49:54.144810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.969 22:49:54 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.969 22:49:54 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:09:17.969 22:49:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:17.969 22:49:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.969 22:49:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:18.228 [2024-07-22 22:49:54.350388] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:18.228 [2024-07-22 22:49:54.350423] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:18.228 [2024-07-22 22:49:54.350445] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:18.228 [2024-07-22 22:49:54.350461] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:18.228 [2024-07-22 22:49:54.350474] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.228 22:49:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:18.228 [2024-07-22 22:49:54.530658] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
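Because the scheduler test application is launched with --wait-for-rpc, its framework stays paused until told otherwise: the harness first switches to the dynamic scheduler (note the NOTICE lines about falling back from the dpdk governor) and only then issues framework_start_init. A minimal sketch of that ordering against any target started with --wait-for-rpc; both RPCs appear in the rpc_get_methods listing earlier in this log, and the relative script path is an assumption:

# the target must still be in the pre-init state created by --wait-for-rpc
./scripts/rpc.py framework_set_scheduler dynamic

# now allow subsystem initialization to proceed
./scripts/rpc.py framework_start_init

# optionally confirm which scheduler is active afterwards
./scripts/rpc.py framework_get_scheduler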
00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.228 22:49:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.228 22:49:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 ************************************ 00:09:18.492 START TEST scheduler_create_thread 00:09:18.492 ************************************ 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 2 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 3 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 4 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 5 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 6 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 7 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 8 00:09:18.492 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 9 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 10 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.493 22:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.434 22:49:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.434 00:09:19.434 real 0m1.175s 00:09:19.434 user 0m0.014s 00:09:19.434 sys 0m0.005s 00:09:19.434 22:49:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.434 22:49:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.434 ************************************ 00:09:19.434 END TEST scheduler_create_thread 00:09:19.434 ************************************ 00:09:19.693 22:49:55 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:09:19.693 22:49:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:19.693 22:49:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 749946 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 749946 ']' 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 749946 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749946 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749946' 00:09:19.694 killing process with pid 749946 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 749946 00:09:19.694 22:49:55 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 749946 00:09:19.953 [2024-07-22 22:49:56.225886] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
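The scheduler_create_thread sequence above drives RPCs registered by the scheduler test application itself, which is why every call carries --plugin scheduler_plugin: threads are created with a name, an optional pinned cpumask (-m) and an activity percentage (-a), one thread's activity is changed, and another is deleted. A condensed sketch of the same calls; rpc_cmd is the harness's wrapper around scripts/rpc.py, and the thread ids 11 and 12 are simply the values returned in this particular run:

# create a thread pinned to lcore 0 that reports itself 100% active
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

# create an unpinned thread that is active 30% of the time
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30

# lower the activity of an existing thread (id 11 in this run) to 50%
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50

# create a thread and delete it again by the id the create call returned (12 here)
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12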
00:09:20.522 00:09:20.522 real 0m2.906s 00:09:20.522 user 0m3.858s 00:09:20.522 sys 0m0.558s 00:09:20.522 22:49:56 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.522 22:49:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:20.522 ************************************ 00:09:20.522 END TEST event_scheduler 00:09:20.522 ************************************ 00:09:20.522 22:49:56 event -- common/autotest_common.sh@1142 -- # return 0 00:09:20.522 22:49:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:20.522 22:49:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:20.522 22:49:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.522 22:49:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.522 22:49:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:20.522 ************************************ 00:09:20.522 START TEST app_repeat 00:09:20.522 ************************************ 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=750269 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 750269' 00:09:20.522 Process app_repeat pid: 750269 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:20.522 spdk_app_start Round 0 00:09:20.522 22:49:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 750269 /var/tmp/spdk-nbd.sock 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 750269 ']' 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:20.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.522 22:49:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:20.522 [2024-07-22 22:49:56.715142] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
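app_repeat is started here with -r /var/tmp/spdk-nbd.sock, so its RPC listener lives on that non-default path; every rpc.py call in the rest of this test therefore passes -s /var/tmp/spdk-nbd.sock. A small sketch of the pattern, with the 64 MB / 4096-byte-block malloc bdev and the nbd export matching the calls that follow (relative script path assumed, and the nbd kernel module must be loaded, cf. the modprobe check above):

RPC_SOCK=/var/tmp/spdk-nbd.sock

# create a 64 MB malloc bdev with a 4096-byte block size; the call prints the new bdev name (Malloc0 here)
./scripts/rpc.py -s "$RPC_SOCK" bdev_malloc_create 64 4096

# expose that bdev as a kernel block device for dd-based data verification
./scripts/rpc.py -s "$RPC_SOCK" nbd_start_disk Malloc0 /dev/nbd0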
00:09:20.522 [2024-07-22 22:49:56.715213] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750269 ] 00:09:20.522 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.522 [2024-07-22 22:49:56.814601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.781 [2024-07-22 22:49:56.971129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.781 [2024-07-22 22:49:56.971165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.039 22:49:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.039 22:49:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:21.039 22:49:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.608 Malloc0 00:09:21.608 22:49:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.176 Malloc1 00:09:22.176 22:49:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.176 22:49:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:22.742 /dev/nbd0 00:09:22.742 22:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:22.742 22:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:22.742 22:49:58 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.742 1+0 records in 00:09:22.742 1+0 records out 00:09:22.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291933 s, 14.0 MB/s 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:22.742 22:49:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:22.742 22:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.742 22:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.742 22:49:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:23.001 /dev/nbd1 00:09:23.001 22:49:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:23.001 22:49:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.001 1+0 records in 00:09:23.001 1+0 records out 00:09:23.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313743 s, 13.1 MB/s 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:23.001 22:49:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:23.002 22:49:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:23.002 22:49:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.002 22:49:59 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.002 22:49:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.002 22:49:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.002 22:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:23.569 { 00:09:23.569 "nbd_device": "/dev/nbd0", 00:09:23.569 "bdev_name": "Malloc0" 00:09:23.569 }, 00:09:23.569 { 00:09:23.569 "nbd_device": "/dev/nbd1", 00:09:23.569 "bdev_name": "Malloc1" 00:09:23.569 } 00:09:23.569 ]' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:23.569 { 00:09:23.569 "nbd_device": "/dev/nbd0", 00:09:23.569 "bdev_name": "Malloc0" 00:09:23.569 }, 00:09:23.569 { 00:09:23.569 "nbd_device": "/dev/nbd1", 00:09:23.569 "bdev_name": "Malloc1" 00:09:23.569 } 00:09:23.569 ]' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.569 /dev/nbd1' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.569 /dev/nbd1' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:23.569 256+0 records in 00:09:23.569 256+0 records out 00:09:23.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101189 s, 104 MB/s 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.569 256+0 records in 00:09:23.569 256+0 records out 00:09:23.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029976 s, 35.0 MB/s 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:23.569 256+0 records in 00:09:23.569 256+0 records out 00:09:23.569 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0320432 s, 32.7 MB/s 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.569 22:49:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.137 22:50:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.704 22:50:00 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.704 22:50:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:25.271 22:50:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:25.271 22:50:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:25.839 22:50:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:26.099 [2024-07-22 22:50:02.307969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.358 [2024-07-22 22:50:02.461387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.358 [2024-07-22 22:50:02.461387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.358 [2024-07-22 22:50:02.533152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:26.358 [2024-07-22 22:50:02.533232] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:28.887 22:50:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:28.887 22:50:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:28.887 spdk_app_start Round 1 00:09:28.887 22:50:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 750269 /var/tmp/spdk-nbd.sock 00:09:28.887 22:50:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 750269 ']' 00:09:28.887 22:50:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:28.887 22:50:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.887 22:50:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:28.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
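Illustrative sketch, not part of the captured log: the nbd_dd_data_verify steps traced above reduce to a write pass followed by a verify pass over both nbd devices. Paths are shortened here; the trace uses the full workspace path for the temporary file.

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # generate 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$nbd"                              # verify pass: device contents must match the reference
    done
    rm nbdrandtest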
00:09:28.887 22:50:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.887 22:50:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:29.453 22:50:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.453 22:50:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:29.453 22:50:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:30.020 Malloc0 00:09:30.020 22:50:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:30.587 Malloc1 00:09:30.587 22:50:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.587 22:50:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:30.588 22:50:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.588 22:50:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:30.588 22:50:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:30.588 22:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:30.588 22:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.588 22:50:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:31.155 /dev/nbd0 00:09:31.155 22:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:31.155 22:50:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:31.155 1+0 records in 00:09:31.155 1+0 records out 00:09:31.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245696 s, 16.7 MB/s 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:31.155 22:50:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:31.155 22:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.155 22:50:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.155 22:50:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:31.722 /dev/nbd1 00:09:31.722 22:50:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:31.722 22:50:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.722 1+0 records in 00:09:31.722 1+0 records out 00:09:31.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221801 s, 18.5 MB/s 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:31.722 22:50:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:31.722 22:50:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.722 22:50:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.981 22:50:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.981 22:50:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.981 22:50:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:32.240 { 00:09:32.240 "nbd_device": "/dev/nbd0", 00:09:32.240 "bdev_name": "Malloc0" 00:09:32.240 }, 00:09:32.240 { 00:09:32.240 "nbd_device": "/dev/nbd1", 00:09:32.240 "bdev_name": "Malloc1" 00:09:32.240 } 00:09:32.240 ]' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:32.240 { 00:09:32.240 "nbd_device": "/dev/nbd0", 00:09:32.240 "bdev_name": "Malloc0" 00:09:32.240 }, 00:09:32.240 { 00:09:32.240 "nbd_device": "/dev/nbd1", 00:09:32.240 "bdev_name": "Malloc1" 00:09:32.240 } 00:09:32.240 ]' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:32.240 /dev/nbd1' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:32.240 /dev/nbd1' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:32.240 256+0 records in 00:09:32.240 256+0 records out 00:09:32.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00544984 s, 192 MB/s 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:32.240 256+0 records in 00:09:32.240 256+0 records out 00:09:32.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029797 s, 35.2 MB/s 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.240 256+0 records in 00:09:32.240 256+0 records out 00:09:32.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323122 s, 32.5 MB/s 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.240 22:50:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.177 22:50:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.436 22:50:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.436 22:50:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.436 22:50:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.436 22:50:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.436 22:50:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.693 22:50:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.693 22:50:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:34.260 22:50:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:34.519 [2024-07-22 22:50:10.782836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.778 [2024-07-22 22:50:10.937102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.778 [2024-07-22 22:50:10.937108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.778 [2024-07-22 22:50:11.010583] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:34.778 [2024-07-22 22:50:11.010658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.317 22:50:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:37.317 22:50:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:37.317 spdk_app_start Round 2 00:09:37.317 22:50:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 750269 /var/tmp/spdk-nbd.sock 00:09:37.317 22:50:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 750269 ']' 00:09:37.317 22:50:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.317 22:50:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.317 22:50:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
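Illustrative sketch, not part of the captured log: each spdk_app_start round traced above drives the target through the same RPC lifecycle over /var/tmp/spdk-nbd.sock. The rpc.py path is shortened here; the trace invokes it from the full workspace checkout.

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096          # creates Malloc0
    $rpc bdev_malloc_create 64 4096          # creates Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    $rpc nbd_get_disks                       # expected to report 2 nbd devices
    # ... dd/cmp data verification as in the rounds above ...
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_get_disks                       # expected to report 0 nbd devices
    $rpc spdk_kill_instance SIGTERM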
00:09:37.317 22:50:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.317 22:50:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:37.883 22:50:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.883 22:50:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:37.883 22:50:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.450 Malloc0 00:09:38.450 22:50:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:39.017 Malloc1 00:09:39.017 22:50:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.017 22:50:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:39.584 /dev/nbd0 00:09:39.843 22:50:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:39.843 22:50:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:39.843 1+0 records in 00:09:39.843 1+0 records out 00:09:39.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264218 s, 15.5 MB/s 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.843 22:50:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:39.843 22:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.843 22:50:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.843 22:50:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:40.410 /dev/nbd1 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.410 1+0 records in 00:09:40.410 1+0 records out 00:09:40.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313866 s, 13.1 MB/s 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:40.410 22:50:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.410 22:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.977 22:50:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:40.977 { 00:09:40.977 "nbd_device": "/dev/nbd0", 00:09:40.977 "bdev_name": "Malloc0" 00:09:40.977 }, 00:09:40.977 { 00:09:40.977 "nbd_device": "/dev/nbd1", 00:09:40.977 "bdev_name": "Malloc1" 00:09:40.977 } 00:09:40.977 ]' 00:09:40.977 22:50:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:40.977 { 00:09:40.977 "nbd_device": "/dev/nbd0", 00:09:40.977 "bdev_name": "Malloc0" 00:09:40.977 }, 00:09:40.977 { 00:09:40.977 "nbd_device": "/dev/nbd1", 00:09:40.978 "bdev_name": "Malloc1" 00:09:40.978 } 00:09:40.978 ]' 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:40.978 /dev/nbd1' 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:40.978 /dev/nbd1' 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:40.978 256+0 records in 00:09:40.978 256+0 records out 00:09:40.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672633 s, 156 MB/s 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.978 22:50:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:41.236 256+0 records in 00:09:41.236 256+0 records out 00:09:41.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312391 s, 33.6 MB/s 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:41.236 256+0 records in 00:09:41.236 256+0 records out 00:09:41.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319888 s, 32.8 MB/s 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.236 22:50:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.495 22:50:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.061 22:50:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.319 22:50:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.319 22:50:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:42.887 22:50:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:43.146 [2024-07-22 22:50:19.268190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.146 [2024-07-22 22:50:19.424292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.146 [2024-07-22 22:50:19.424293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.405 [2024-07-22 22:50:19.497430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:43.405 [2024-07-22 22:50:19.497516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:45.935 22:50:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 750269 /var/tmp/spdk-nbd.sock 00:09:45.935 22:50:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 750269 ']' 00:09:45.935 22:50:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:45.935 22:50:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.935 22:50:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:45.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
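Illustrative sketch, not part of the captured log: the waitfornbd checks traced above poll /proc/partitions for the device and then confirm a direct read succeeds. The retry delay and the shortened nbdtest path are assumptions; only the checks themselves appear in the trace.

    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break    # device is registered with the kernel
        sleep 0.1                                    # retry interval assumed, not visible in the trace
    done
    dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s nbdtest)
    rm -f nbdtest
    [ "$size" != 0 ]                                 # a 4096-byte read confirms the device is usable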
00:09:45.935 22:50:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.935 22:50:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:46.502 22:50:22 event.app_repeat -- event/event.sh@39 -- # killprocess 750269 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 750269 ']' 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 750269 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 750269 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 750269' 00:09:46.502 killing process with pid 750269 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@967 -- # kill 750269 00:09:46.502 22:50:22 event.app_repeat -- common/autotest_common.sh@972 -- # wait 750269 00:09:46.762 spdk_app_start is called in Round 0. 00:09:46.762 Shutdown signal received, stop current app iteration 00:09:46.762 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 reinitialization... 00:09:46.762 spdk_app_start is called in Round 1. 00:09:46.762 Shutdown signal received, stop current app iteration 00:09:46.762 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 reinitialization... 00:09:46.762 spdk_app_start is called in Round 2. 00:09:46.762 Shutdown signal received, stop current app iteration 00:09:46.762 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 reinitialization... 00:09:46.762 spdk_app_start is called in Round 3. 
00:09:46.762 Shutdown signal received, stop current app iteration 00:09:46.762 22:50:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:46.762 22:50:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:46.762 00:09:46.762 real 0m26.208s 00:09:46.762 user 1m0.645s 00:09:46.762 sys 0m5.673s 00:09:46.762 22:50:22 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.762 22:50:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.762 ************************************ 00:09:46.762 END TEST app_repeat 00:09:46.762 ************************************ 00:09:46.762 22:50:22 event -- common/autotest_common.sh@1142 -- # return 0 00:09:46.762 22:50:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:46.762 22:50:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:46.762 22:50:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:46.762 22:50:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.762 22:50:22 event -- common/autotest_common.sh@10 -- # set +x 00:09:46.762 ************************************ 00:09:46.762 START TEST cpu_locks 00:09:46.762 ************************************ 00:09:46.762 22:50:22 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:46.762 * Looking for test storage... 00:09:46.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:46.762 22:50:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:46.763 22:50:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:46.763 22:50:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:46.763 22:50:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:46.763 22:50:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:46.763 22:50:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.763 22:50:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.023 ************************************ 00:09:47.023 START TEST default_locks 00:09:47.023 ************************************ 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=753418 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 753418 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 753418 ']' 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.023 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.023 [2024-07-22 22:50:23.205281] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:47.023 [2024-07-22 22:50:23.205410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753418 ] 00:09:47.023 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.282 [2024-07-22 22:50:23.338001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.282 [2024-07-22 22:50:23.490494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.853 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.853 22:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:09:47.853 22:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 753418 00:09:47.853 22:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 753418 00:09:47.853 22:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:48.421 lslocks: write error 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 753418 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 753418 ']' 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 753418 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 753418 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 753418' 00:09:48.421 killing process with pid 753418 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 753418 00:09:48.421 22:50:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 753418 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 753418 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 753418 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 753418 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 753418 ']' 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (753418) - No such process 00:09:48.989 ERROR: process (pid: 753418) is no longer running 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:48.989 00:09:48.989 real 0m2.026s 00:09:48.989 user 0m2.013s 00:09:48.989 sys 0m0.985s 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.989 22:50:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.989 ************************************ 00:09:48.989 END TEST default_locks 00:09:48.989 ************************************ 00:09:48.989 22:50:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:48.989 22:50:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:48.990 22:50:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.990 22:50:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.990 22:50:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.990 ************************************ 00:09:48.990 START TEST default_locks_via_rpc 00:09:48.990 ************************************ 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=753708 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.990 22:50:25 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 753708 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 753708 ']' 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.990 22:50:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.250 [2024-07-22 22:50:25.319431] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:49.250 [2024-07-22 22:50:25.319602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753708 ] 00:09:49.250 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.250 [2024-07-22 22:50:25.456503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.510 [2024-07-22 22:50:25.609527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:49.769 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.770 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.770 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.770 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 753708 00:09:49.770 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 753708 00:09:49.770 22:50:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:50.337 22:50:26 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 753708 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 753708 ']' 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 753708 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 753708 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 753708' 00:09:50.337 killing process with pid 753708 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 753708 00:09:50.337 22:50:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 753708 00:09:50.905 00:09:50.905 real 0m1.902s 00:09:50.905 user 0m1.879s 00:09:50.905 sys 0m0.884s 00:09:50.905 22:50:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.905 22:50:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.905 ************************************ 00:09:50.905 END TEST default_locks_via_rpc 00:09:50.905 ************************************ 00:09:50.905 22:50:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:50.905 22:50:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:50.905 22:50:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:50.905 22:50:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.905 22:50:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.905 ************************************ 00:09:50.905 START TEST non_locking_app_on_locked_coremask 00:09:50.905 ************************************ 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=753996 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 753996 /var/tmp/spdk.sock 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 753996 ']' 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.905 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.165 [2024-07-22 22:50:27.292080] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:51.165 [2024-07-22 22:50:27.292251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753996 ] 00:09:51.165 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.165 [2024-07-22 22:50:27.427479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.425 [2024-07-22 22:50:27.580590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=754010 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 754010 /var/tmp/spdk2.sock 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 754010 ']' 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.995 22:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.995 22:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.995 [2024-07-22 22:50:28.118774] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:51.995 [2024-07-22 22:50:28.118987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754010 ] 00:09:51.995 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.995 [2024-07-22 22:50:28.292595] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
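The two launches driving non_locking_app_on_locked_coremask are visible in the command lines above; condensed to repo-relative paths (the full Jenkins workspace prefix is kept in the log itself), the pair is roughly:

  # first instance claims core 0 and takes the core-0 lock file
  # (/var/tmp/spdk_cpu_lock_000 in the naming shown later in this log)
  ./build/bin/spdk_tgt -m 0x1 &
  # second instance is allowed onto the same core only because lock
  # enforcement is turned off for it
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &

Both instances keep running, which is what lets the test kill pid 753996 and pid 754010 separately further down.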
00:09:51.995 [2024-07-22 22:50:28.292675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.566 [2024-07-22 22:50:28.587419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.507 22:50:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.507 22:50:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:53.507 22:50:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 753996 00:09:53.507 22:50:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 753996 00:09:53.507 22:50:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:54.936 lslocks: write error 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 753996 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 753996 ']' 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 753996 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 753996 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 753996' 00:09:54.936 killing process with pid 753996 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 753996 00:09:54.936 22:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 753996 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 754010 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 754010 ']' 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 754010 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 754010 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 754010' 00:09:55.876 killing 
process with pid 754010 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 754010 00:09:55.876 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 754010 00:09:56.445 00:09:56.445 real 0m5.520s 00:09:56.445 user 0m6.230s 00:09:56.445 sys 0m2.007s 00:09:56.445 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.445 22:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.445 ************************************ 00:09:56.445 END TEST non_locking_app_on_locked_coremask 00:09:56.445 ************************************ 00:09:56.445 22:50:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:56.445 22:50:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:56.445 22:50:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.445 22:50:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.445 22:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:56.705 ************************************ 00:09:56.705 START TEST locking_app_on_unlocked_coremask 00:09:56.705 ************************************ 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=754582 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 754582 /var/tmp/spdk.sock 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 754582 ']' 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.705 22:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.705 [2024-07-22 22:50:32.883104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:09:56.705 [2024-07-22 22:50:32.883291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754582 ] 00:09:56.705 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.965 [2024-07-22 22:50:33.025855] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:56.965 [2024-07-22 22:50:33.025938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.965 [2024-07-22 22:50:33.183430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=754711 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 754711 /var/tmp/spdk2.sock 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 754711 ']' 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:57.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.534 22:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:57.534 [2024-07-22 22:50:33.657859] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:09:57.534 [2024-07-22 22:50:33.657975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754711 ] 00:09:57.534 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.534 [2024-07-22 22:50:33.820474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.103 [2024-07-22 22:50:34.138568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.041 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.041 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:59.041 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 754711 00:09:59.041 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 754711 00:09:59.041 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:59.609 lslocks: write error 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 754582 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 754582 ']' 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 754582 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 754582 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 754582' 00:09:59.609 killing process with pid 754582 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 754582 00:09:59.609 22:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 754582 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 754711 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 754711 ']' 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 754711 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 754711 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 754711' 00:10:00.547 killing process with pid 754711 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 754711 00:10:00.547 22:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 754711 00:10:01.114 00:10:01.114 real 0m4.657s 00:10:01.114 user 0m5.219s 00:10:01.114 sys 0m1.576s 00:10:01.114 22:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.114 22:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:01.114 ************************************ 00:10:01.114 END TEST locking_app_on_unlocked_coremask 00:10:01.114 ************************************ 00:10:01.373 22:50:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:01.373 22:50:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:01.373 22:50:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:01.373 22:50:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.373 22:50:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:01.373 ************************************ 00:10:01.373 START TEST locking_app_on_locked_coremask 00:10:01.373 ************************************ 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=755142 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 755142 /var/tmp/spdk.sock 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 755142 ']' 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.373 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.374 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.374 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.374 22:50:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:01.374 [2024-07-22 22:50:37.609902] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:10:01.374 [2024-07-22 22:50:37.610010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755142 ] 00:10:01.374 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.634 [2024-07-22 22:50:37.743433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.634 [2024-07-22 22:50:37.902969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=755276 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 755276 /var/tmp/spdk2.sock 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 755276 /var/tmp/spdk2.sock 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 755276 /var/tmp/spdk2.sock 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 755276 ']' 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.205 22:50:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.205 [2024-07-22 22:50:38.432607] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
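locking_app_on_locked_coremask is the contrasting case: both instances keep lock enforcement on, so the second launch on the same core is expected to die, which is what the "Cannot create lock on core 0" error just below records. A condensed sketch of the two launches, with the same repo-relative-path assumption as above:

  ./build/bin/spdk_tgt -m 0x1 &                        # claims core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits: core 0 already locked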
00:10:02.205 [2024-07-22 22:50:38.432796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755276 ] 00:10:02.205 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.464 [2024-07-22 22:50:38.639708] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 755142 has claimed it. 00:10:02.464 [2024-07-22 22:50:38.639834] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:03.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (755276) - No such process 00:10:03.399 ERROR: process (pid: 755276) is no longer running 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 755142 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 755142 00:10:03.399 22:50:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:03.967 lslocks: write error 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 755142 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 755142 ']' 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 755142 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755142 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755142' 00:10:03.967 killing process with pid 755142 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 755142 00:10:03.967 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 755142 00:10:04.535 00:10:04.535 real 0m3.263s 00:10:04.535 user 0m3.816s 00:10:04.535 sys 0m1.294s 00:10:04.535 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.535 22:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:04.535 ************************************ 00:10:04.535 END TEST locking_app_on_locked_coremask 00:10:04.535 ************************************ 00:10:04.535 22:50:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:04.535 22:50:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:04.535 22:50:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:04.535 22:50:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.535 22:50:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:04.535 ************************************ 00:10:04.535 START TEST locking_overlapped_coremask 00:10:04.535 ************************************ 00:10:04.535 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=755577 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 755577 /var/tmp/spdk.sock 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 755577 ']' 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.794 22:50:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:04.794 [2024-07-22 22:50:40.963033] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:10:04.794 [2024-07-22 22:50:40.963217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755577 ] 00:10:04.794 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.051 [2024-07-22 22:50:41.109381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.051 [2024-07-22 22:50:41.266029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.051 [2024-07-22 22:50:41.266089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.051 [2024-07-22 22:50:41.266094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=755699 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 755699 /var/tmp/spdk2.sock 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 755699 /var/tmp/spdk2.sock 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 755699 /var/tmp/spdk2.sock 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 755699 ']' 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:05.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.309 22:50:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:05.567 [2024-07-22 22:50:41.636832] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
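The claim failure reported just below follows directly from the two core masks: the first target holds -m 0x7 (cores 0-2) and the second asks for -m 0x1c (cores 2-4), so they contend for exactly one core. A quick way to see which one, using plain bash arithmetic (nothing SPDK-specific):

  # 0x7 = cores 0,1,2 ; 0x1c = cores 2,3,4 ; the intersection is the contested core
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2

which matches the "Cannot create lock on core 2, probably process 755577 has claimed it" error that the test then asserts on.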
00:10:05.567 [2024-07-22 22:50:41.636932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755699 ] 00:10:05.567 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.567 [2024-07-22 22:50:41.753687] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 755577 has claimed it. 00:10:05.567 [2024-07-22 22:50:41.753760] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:06.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (755699) - No such process 00:10:06.133 ERROR: process (pid: 755699) is no longer running 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 755577 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 755577 ']' 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 755577 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755577 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755577' 00:10:06.133 killing process with pid 755577 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 755577 00:10:06.133 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 755577 00:10:06.701 00:10:06.701 real 0m2.085s 00:10:06.701 user 0m5.417s 00:10:06.701 sys 0m0.673s 00:10:06.701 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.701 22:50:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 ************************************ 00:10:06.701 END TEST locking_overlapped_coremask 00:10:06.701 ************************************ 00:10:06.701 22:50:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:06.701 22:50:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:06.701 22:50:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:06.701 22:50:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.701 22:50:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 ************************************ 00:10:06.701 START TEST locking_overlapped_coremask_via_rpc 00:10:06.701 ************************************ 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=755876 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 755876 /var/tmp/spdk.sock 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 755876 ']' 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.701 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.961 [2024-07-22 22:50:43.107466] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:06.961 [2024-07-22 22:50:43.107572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755876 ] 00:10:06.961 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.961 [2024-07-22 22:50:43.236020] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:06.961 [2024-07-22 22:50:43.236101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.218 [2024-07-22 22:50:43.392615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.218 [2024-07-22 22:50:43.392677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.218 [2024-07-22 22:50:43.392681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=755898 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 755898 /var/tmp/spdk2.sock 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 755898 ']' 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:07.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.476 22:50:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.734 [2024-07-22 22:50:43.799082] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:07.734 [2024-07-22 22:50:43.799256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755898 ] 00:10:07.734 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.734 [2024-07-22 22:50:43.940038] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:07.734 [2024-07-22 22:50:43.940084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.993 [2024-07-22 22:50:44.165305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.993 [2024-07-22 22:50:44.168367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:07.993 [2024-07-22 22:50:44.168370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.368 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.368 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.369 [2024-07-22 22:50:45.303452] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 755876 has claimed it. 
00:10:09.369 request: 00:10:09.369 { 00:10:09.369 "method": "framework_enable_cpumask_locks", 00:10:09.369 "req_id": 1 00:10:09.369 } 00:10:09.369 Got JSON-RPC error response 00:10:09.369 response: 00:10:09.369 { 00:10:09.369 "code": -32603, 00:10:09.369 "message": "Failed to claim CPU core: 2" 00:10:09.369 } 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 755876 /var/tmp/spdk.sock 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 755876 ']' 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.369 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 755898 /var/tmp/spdk2.sock 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 755898 ']' 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:09.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
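The JSON-RPC exchange above is the framework_enable_cpumask_locks call that rpc_cmd drives in the test, shown raw. Invoked by hand it would look roughly like this, assuming SPDK's usual scripts/rpc.py entry point (socket paths taken from this run):

  # the second target listens on /var/tmp/spdk2.sock and was started with
  # --disable-cpumask-locks, so it holds no per-core lock files yet
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # here this fails with -32603 "Failed to claim CPU core: 2": the first target
  # (-m 0x7) already claimed its cores via an earlier enable call
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  # against the first target this releases its lock files, as the
  # default_locks_via_rpc run earlier in this log demonstrates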
00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.627 22:50:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.194 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.194 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:10.194 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:10.195 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:10.195 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:10.195 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:10.195 00:10:10.195 real 0m3.305s 00:10:10.195 user 0m2.210s 00:10:10.195 sys 0m0.308s 00:10:10.195 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.195 22:50:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.195 ************************************ 00:10:10.195 END TEST locking_overlapped_coremask_via_rpc 00:10:10.195 ************************************ 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:10.195 22:50:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:10.195 22:50:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 755876 ]] 00:10:10.195 22:50:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 755876 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 755876 ']' 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 755876 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755876 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755876' 00:10:10.195 killing process with pid 755876 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 755876 00:10:10.195 22:50:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 755876 00:10:10.768 22:50:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 755898 ]] 00:10:10.768 22:50:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 755898 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 755898 ']' 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 755898 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755898 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755898' 00:10:10.768 killing process with pid 755898 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 755898 00:10:10.768 22:50:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 755898 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 755876 ]] 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 755876 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 755876 ']' 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 755876 00:10:11.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (755876) - No such process 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 755876 is not found' 00:10:11.367 Process with pid 755876 is not found 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 755898 ]] 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 755898 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 755898 ']' 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 755898 00:10:11.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (755898) - No such process 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 755898 is not found' 00:10:11.367 Process with pid 755898 is not found 00:10:11.367 22:50:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.367 00:10:11.367 real 0m24.657s 00:10:11.367 user 0m44.286s 00:10:11.367 sys 0m9.079s 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.367 22:50:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.367 ************************************ 00:10:11.367 END TEST cpu_locks 00:10:11.367 ************************************ 00:10:11.367 22:50:47 event -- common/autotest_common.sh@1142 -- # return 0 00:10:11.367 00:10:11.367 real 0m58.649s 00:10:11.367 user 1m55.775s 00:10:11.367 sys 0m16.199s 00:10:11.367 22:50:47 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.367 22:50:47 event -- common/autotest_common.sh@10 -- # set +x 00:10:11.367 ************************************ 00:10:11.367 END TEST event 00:10:11.367 ************************************ 00:10:11.627 22:50:47 -- common/autotest_common.sh@1142 -- # return 0 00:10:11.627 22:50:47 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:11.627 22:50:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:11.627 22:50:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.627 22:50:47 -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.627 ************************************ 00:10:11.627 START TEST thread 00:10:11.627 ************************************ 00:10:11.627 22:50:47 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:11.627 * Looking for test storage... 00:10:11.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:11.628 22:50:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:11.628 22:50:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:11.628 22:50:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.628 22:50:47 thread -- common/autotest_common.sh@10 -- # set +x 00:10:11.628 ************************************ 00:10:11.628 START TEST thread_poller_perf 00:10:11.628 ************************************ 00:10:11.628 22:50:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:11.628 [2024-07-22 22:50:47.855424] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:11.628 [2024-07-22 22:50:47.855504] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756517 ] 00:10:11.628 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.888 [2024-07-22 22:50:47.987370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.888 [2024-07-22 22:50:48.142415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.888 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:13.268 ====================================== 00:10:13.268 busy:2723687068 (cyc) 00:10:13.268 total_run_count: 143000 00:10:13.268 tsc_hz: 2700000000 (cyc) 00:10:13.268 ====================================== 00:10:13.268 poller_cost: 19046 (cyc), 7054 (nsec) 00:10:13.268 00:10:13.268 real 0m1.440s 00:10:13.268 user 0m1.269s 00:10:13.268 sys 0m0.158s 00:10:13.268 22:50:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.268 22:50:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.268 ************************************ 00:10:13.268 END TEST thread_poller_perf 00:10:13.268 ************************************ 00:10:13.268 22:50:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:13.268 22:50:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.268 22:50:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:13.268 22:50:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.268 22:50:49 thread -- common/autotest_common.sh@10 -- # set +x 00:10:13.268 ************************************ 00:10:13.268 START TEST thread_poller_perf 00:10:13.268 ************************************ 00:10:13.268 22:50:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.268 [2024-07-22 22:50:49.347553] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:13.268 [2024-07-22 22:50:49.347619] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756669 ] 00:10:13.268 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.268 [2024-07-22 22:50:49.481469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.528 [2024-07-22 22:50:49.639715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.528 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:14.467 ====================================== 00:10:14.467 busy:2706030020 (cyc) 00:10:14.467 total_run_count: 1952000 00:10:14.467 tsc_hz: 2700000000 (cyc) 00:10:14.467 ====================================== 00:10:14.467 poller_cost: 1386 (cyc), 513 (nsec) 00:10:14.467 00:10:14.467 real 0m1.429s 00:10:14.467 user 0m1.265s 00:10:14.467 sys 0m0.151s 00:10:14.467 22:50:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.467 22:50:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:14.467 ************************************ 00:10:14.467 END TEST thread_poller_perf 00:10:14.467 ************************************ 00:10:14.727 22:50:50 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:14.727 22:50:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:14.727 00:10:14.727 real 0m3.069s 00:10:14.727 user 0m2.612s 00:10:14.727 sys 0m0.448s 00:10:14.727 22:50:50 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.727 22:50:50 thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.727 ************************************ 00:10:14.727 END TEST thread 00:10:14.727 ************************************ 00:10:14.727 22:50:50 -- common/autotest_common.sh@1142 -- # return 0 00:10:14.727 22:50:50 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:10:14.727 22:50:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:14.727 22:50:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.727 22:50:50 -- common/autotest_common.sh@10 -- # set +x 00:10:14.727 ************************************ 00:10:14.727 START TEST accel 00:10:14.727 ************************************ 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:10:14.727 * Looking for test storage... 00:10:14.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:10:14.727 22:50:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:14.727 22:50:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:10:14.727 22:50:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:14.727 22:50:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=756869 00:10:14.727 22:50:50 accel -- accel/accel.sh@63 -- # waitforlisten 756869 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@829 -- # '[' -z 756869 ']' 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.727 22:50:50 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
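The two poller_perf summaries above are linked by simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure follows from the reported TSC frequency of 2700000000 Hz (2.7 cycles per nanosecond). A quick check of the printed numbers, assuming bc is available:

  # poller_cost (cyc)  = busy / total_run_count
  # poller_cost (nsec) = poller_cost (cyc) / 2.7  at a 2.7 GHz TSC
  echo '2723687068 / 143000' | bc    # -> 19046 cyc  (1 us period run)
  echo '19046 / 2.7' | bc            # -> 7054 nsec
  echo '2706030020 / 1952000' | bc   # -> 1386 cyc   (0 us period run)
  echo '1386 / 2.7' | bc             # -> 513 nsec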
00:10:14.727 22:50:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.727 22:50:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:14.727 22:50:50 accel -- common/autotest_common.sh@10 -- # set +x 00:10:14.727 22:50:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:14.727 22:50:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.727 22:50:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.727 22:50:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:14.727 22:50:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:14.727 22:50:50 accel -- accel/accel.sh@41 -- # jq -r . 00:10:14.727 [2024-07-22 22:50:51.022960] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:14.727 [2024-07-22 22:50:51.023048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756869 ] 00:10:14.988 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.988 [2024-07-22 22:50:51.117893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.988 [2024-07-22 22:50:51.266410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@862 -- # return 0 00:10:15.928 22:50:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:15.928 22:50:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:15.928 22:50:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:15.928 22:50:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:15.928 22:50:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:15.928 22:50:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.928 22:50:52 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@10 -- # set +x 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 
22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # IFS== 00:10:15.928 22:50:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:15.928 22:50:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:15.928 22:50:52 accel -- accel/accel.sh@75 -- # killprocess 756869 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@948 -- # '[' -z 756869 ']' 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@952 -- # kill -0 756869 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@953 -- # uname 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 756869 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 756869' 00:10:15.928 killing process with pid 756869 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@967 -- # kill 756869 00:10:15.928 22:50:52 accel -- common/autotest_common.sh@972 -- # wait 756869 00:10:16.498 22:50:52 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:16.498 22:50:52 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:16.498 22:50:52 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:16.498 22:50:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.498 22:50:52 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.498 22:50:52 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:10:16.498 22:50:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:10:16.758 22:50:52 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.758 22:50:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:10:16.758 22:50:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:16.758 22:50:52 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:16.758 22:50:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:16.758 22:50:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.758 22:50:52 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.758 ************************************ 00:10:16.758 START TEST accel_missing_filename 00:10:16.758 ************************************ 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.758 22:50:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:10:16.758 22:50:52 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:10:16.758 [2024-07-22 22:50:52.927939] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:16.758 [2024-07-22 22:50:52.928079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757165 ] 00:10:16.758 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.758 [2024-07-22 22:50:53.063771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.018 [2024-07-22 22:50:53.221215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.018 [2024-07-22 22:50:53.328101] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:17.277 [2024-07-22 22:50:53.433623] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:17.277 A filename is required. 
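"A filename is required." is accel_perf rejecting a compress workload started without an input file: the accel_missing_filename test deliberately omits -l, so this failure is the expected result. For contrast, a presumably valid compress invocation, using the bib test file that the next test passes and leaving out both -y and the JSON config the harness supplies on /dev/fd/62:

  # compress needs an uncompressed input file via -l
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib

The accel_compress_verify test that follows adds -y to this form and is likewise expected to fail, since compression does not support the verify option.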
00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:17.277 00:10:17.277 real 0m0.648s 00:10:17.277 user 0m0.449s 00:10:17.277 sys 0m0.272s 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.277 22:50:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:10:17.277 ************************************ 00:10:17.277 END TEST accel_missing_filename 00:10:17.277 ************************************ 00:10:17.277 22:50:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:17.277 22:50:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:17.277 22:50:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:17.277 22:50:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.277 22:50:53 accel -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 ************************************ 00:10:17.537 START TEST accel_compress_verify 00:10:17.537 ************************************ 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.537 22:50:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.537 22:50:53 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:17.537 22:50:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:17.538 [2024-07-22 22:50:53.654148] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:17.538 [2024-07-22 22:50:53.654295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757309 ] 00:10:17.538 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.538 [2024-07-22 22:50:53.789329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.797 [2024-07-22 22:50:53.945153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.797 [2024-07-22 22:50:54.049607] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.057 [2024-07-22 22:50:54.167831] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:18.057 00:10:18.057 Compression does not support the verify option, aborting. 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:18.057 00:10:18.057 real 0m0.681s 00:10:18.057 user 0m0.488s 00:10:18.057 sys 0m0.271s 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.057 22:50:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:18.057 ************************************ 00:10:18.057 END TEST accel_compress_verify 00:10:18.057 ************************************ 00:10:18.057 22:50:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:18.057 22:50:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:18.057 22:50:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:18.057 22:50:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.057 22:50:54 accel -- common/autotest_common.sh@10 -- # set +x 00:10:18.317 ************************************ 00:10:18.317 START TEST accel_wrong_workload 00:10:18.317 ************************************ 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:18.317 22:50:54 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.317 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:18.317 22:50:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:18.317 Unsupported workload type: foobar 00:10:18.317 [2024-07-22 22:50:54.407937] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:18.317 accel_perf options: 00:10:18.317 [-h help message] 00:10:18.317 [-q queue depth per core] 00:10:18.318 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:18.318 [-T number of threads per core 00:10:18.318 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:18.318 [-t time in seconds] 00:10:18.318 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:18.318 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:18.318 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:18.318 [-l for compress/decompress workloads, name of uncompressed input file 00:10:18.318 [-S for crc32c workload, use this seed value (default 0) 00:10:18.318 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:18.318 [-f for fill workload, use this BYTE value (default 255) 00:10:18.318 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:18.318 [-y verify result if this switch is on] 00:10:18.318 [-a tasks to allocate per core (default: same value as -q)] 00:10:18.318 Can be used to spread operations across a wider range of memory. 
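The option listing above is printed because "foobar" is not a recognized workload, so spdk_app_parse_args rejects -w and the NOT wrapper expects the resulting non-zero exit. For contrast, a valid invocation built only from options in that listing, matching the parameters of the accel_crc32c run further down in this log (the harness also passes a JSON config on /dev/fd/62, omitted here):

  # recognized workload, 1 second run, crc32c seed 32, verify the results
  build/examples/accel_perf -t 1 -w crc32c -S 32 -y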
00:10:18.318 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:18.318 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:18.318 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:18.318 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:18.318 00:10:18.318 real 0m0.038s 00:10:18.318 user 0m0.021s 00:10:18.318 sys 0m0.017s 00:10:18.318 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.318 22:50:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:18.318 ************************************ 00:10:18.318 END TEST accel_wrong_workload 00:10:18.318 ************************************ 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:18.318 22:50:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@10 -- # set +x 00:10:18.318 Error: writing output failed: Broken pipe 00:10:18.318 ************************************ 00:10:18.318 START TEST accel_negative_buffers 00:10:18.318 ************************************ 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:18.318 22:50:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:18.318 -x option must be non-negative. 
00:10:18.318 [2024-07-22 22:50:54.510575] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:18.318 accel_perf options: 00:10:18.318 [-h help message] 00:10:18.318 [-q queue depth per core] 00:10:18.318 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:18.318 [-T number of threads per core 00:10:18.318 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:18.318 [-t time in seconds] 00:10:18.318 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:18.318 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:18.318 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:18.318 [-l for compress/decompress workloads, name of uncompressed input file 00:10:18.318 [-S for crc32c workload, use this seed value (default 0) 00:10:18.318 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:18.318 [-f for fill workload, use this BYTE value (default 255) 00:10:18.318 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:18.318 [-y verify result if this switch is on] 00:10:18.318 [-a tasks to allocate per core (default: same value as -q)] 00:10:18.318 Can be used to spread operations across a wider range of memory. 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:18.318 00:10:18.318 real 0m0.025s 00:10:18.318 user 0m0.012s 00:10:18.318 sys 0m0.013s 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.318 22:50:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:18.318 ************************************ 00:10:18.318 END TEST accel_negative_buffers 00:10:18.318 ************************************ 00:10:18.318 Error: writing output failed: Broken pipe 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:18.318 22:50:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.318 22:50:54 accel -- common/autotest_common.sh@10 -- # set +x 00:10:18.318 ************************************ 00:10:18.318 START TEST accel_crc32c 00:10:18.318 ************************************ 00:10:18.318 22:50:54 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:18.318 22:50:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:18.318 [2024-07-22 22:50:54.613259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:18.318 [2024-07-22 22:50:54.613440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757389 ] 00:10:18.578 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.578 [2024-07-22 22:50:54.746729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.839 [2024-07-22 22:50:54.900224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.839 22:50:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:18.839 22:50:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:20.221 22:50:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.221 00:10:20.221 real 0m1.664s 00:10:20.221 user 0m1.412s 00:10:20.221 sys 0m0.255s 00:10:20.221 22:50:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.221 22:50:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:20.221 ************************************ 00:10:20.221 END TEST accel_crc32c 00:10:20.221 ************************************ 00:10:20.221 22:50:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:20.221 22:50:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:20.221 22:50:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:20.221 22:50:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.221 22:50:56 accel -- common/autotest_common.sh@10 -- # set +x 00:10:20.221 ************************************ 00:10:20.221 START TEST accel_crc32c_C2 00:10:20.221 ************************************ 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.221 22:50:56 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:20.221 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:20.221 [2024-07-22 22:50:56.350628] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:20.221 [2024-07-22 22:50:56.350773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757660 ] 00:10:20.221 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.221 [2024-07-22 22:50:56.491943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.482 [2024-07-22 22:50:56.651579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.482 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:10:20.483 22:50:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:21.863 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:21.864 00:10:21.864 real 0m1.680s 00:10:21.864 user 0m1.419s 00:10:21.864 sys 0m0.262s 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.864 22:50:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:21.864 ************************************ 00:10:21.864 END TEST accel_crc32c_C2 00:10:21.864 ************************************ 00:10:21.864 22:50:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:21.864 22:50:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:21.864 22:50:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:21.864 22:50:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.864 22:50:58 accel -- common/autotest_common.sh@10 -- # set +x 00:10:21.864 ************************************ 00:10:21.864 START TEST accel_copy 00:10:21.864 ************************************ 00:10:21.864 22:50:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
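The two crc32c runs above (plain and with -C 2) exercise the software accel module: accel_perf submits CRC-32C operations over 4096-byte buffers for 1 second and, judging by the Yes echoed in the configuration dump, the -y flag asks for each result to be verified before the real/user/sys timings are reported. For reference, a minimal bitwise CRC-32C in C is sketched below; the crc32c() helper, the 0xA5 test pattern and the fixed buffer size are illustrative assumptions, not SPDK's table-driven or hardware-assisted code. The accel_copy test that follows is the simplest workload in this series, a plain buffer copy verified against its source (the same memcpy-and-check pattern sketched for dualcast further down).

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
     * Standard convention: start from ~seed, invert again at the end. */
    static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
    }

    int main(void)
    {
        static uint8_t buf[4096];
        memset(buf, 0xA5, sizeof(buf));   /* stand-in for accel_perf's test data */
        printf("crc32c = 0x%08x\n", (unsigned)crc32c(0, buf, sizeof(buf)));
        return 0;
    }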
00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:21.864 22:50:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:21.864 [2024-07-22 22:50:58.110405] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:21.864 [2024-07-22 22:50:58.110552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757819 ] 00:10:22.124 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.124 [2024-07-22 22:50:58.249991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.124 [2024-07-22 22:50:58.402403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:22.385 22:50:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 
22:50:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:23.767 22:50:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.767 00:10:23.767 real 0m1.672s 00:10:23.767 user 0m1.405s 00:10:23.767 sys 0m0.266s 00:10:23.767 22:50:59 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.767 22:50:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:23.767 ************************************ 00:10:23.767 END TEST accel_copy 00:10:23.767 ************************************ 00:10:23.767 22:50:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:23.767 22:50:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:23.767 22:50:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:23.767 22:50:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.767 22:50:59 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.767 ************************************ 00:10:23.767 START TEST accel_fill 00:10:23.767 ************************************ 00:10:23.767 22:50:59 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:23.767 22:50:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:23.767 [2024-07-22 22:50:59.857672] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:23.767 [2024-07-22 22:50:59.857814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758092 ] 00:10:23.767 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.767 [2024-07-22 22:50:59.993517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.028 [2024-07-22 22:51:00.155918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
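The fill test was launched as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, and the configuration echoed above shows the matching pattern value 0x80 (decimal 128) together with 4096-byte buffers. Conceptually the software fill op writes that single byte pattern across the destination and, with verification requested, reads it back; the sketch below is an assumed stand-in for that behaviour, not SPDK code, and fill_and_verify is a made-up helper name.

    #include <stdint.h>
    #include <string.h>
    #include <assert.h>

    /* Conceptual model of the software "fill" op: write one byte pattern
     * across the destination, then verify it (what -y asks accel_perf to do). */
    static void fill_and_verify(uint8_t *dst, size_t len, uint8_t pattern)
    {
        memset(dst, pattern, len);
        for (size_t i = 0; i < len; i++)
            assert(dst[i] == pattern);
    }

    int main(void)
    {
        static uint8_t buf[4096];
        fill_and_verify(buf, sizeof(buf), 0x80);   /* 0x80 == the -f 128 from the command line */
        return 0;
    }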
00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:24.028 22:51:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:25.420 22:51:01 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:25.420 22:51:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.420 00:10:25.420 real 0m1.674s 00:10:25.420 user 0m1.414s 00:10:25.420 sys 0m0.260s 00:10:25.420 22:51:01 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.420 22:51:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:25.420 ************************************ 00:10:25.420 END TEST accel_fill 00:10:25.420 ************************************ 00:10:25.420 22:51:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:25.420 22:51:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:25.420 22:51:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:25.420 22:51:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.420 22:51:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.420 ************************************ 00:10:25.420 START TEST accel_copy_crc32c 00:10:25.420 ************************************ 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:25.420 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:25.420 [2024-07-22 22:51:01.603197] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:25.420 [2024-07-22 22:51:01.603368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758315 ] 00:10:25.420 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.681 [2024-07-22 22:51:01.737390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.681 [2024-07-22 22:51:01.896395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.944 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:25.945 
22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:25.945 22:51:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.324 00:10:27.324 real 0m1.664s 00:10:27.324 user 0m1.398s 00:10:27.324 sys 0m0.267s 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.324 22:51:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 ************************************ 00:10:27.324 END TEST accel_copy_crc32c 00:10:27.324 ************************************ 00:10:27.324 22:51:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:27.324 22:51:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:27.324 22:51:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:27.324 22:51:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.324 22:51:03 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 ************************************ 00:10:27.324 START TEST accel_copy_crc32c_C2 00:10:27.324 ************************************ 00:10:27.324 22:51:03 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:27.324 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:27.324 [2024-07-22 22:51:03.343234] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:27.324 [2024-07-22 22:51:03.343396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758642 ] 00:10:27.324 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.324 [2024-07-22 22:51:03.478591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.324 [2024-07-22 22:51:03.632274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
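copy_crc32c combines the two earlier operations: the source is copied to its destination while a CRC-32C is accumulated over the copied data, and the -C 2 variant being configured here presents the source as a chain of two vectors. The 4096-byte and 8192-byte values in the surrounding dump suggest two 4 KiB sources feeding an 8 KiB destination, though the log does not spell that layout out, so the split below is an assumption for illustration. The sketch shows how such a chained checksum can be accumulated by feeding the running CRC back in as the seed for the next segment; the crc32c() helper is the same bitwise one as in the earlier sketch and the 0x11/0x22 patterns are made up.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Same bitwise CRC-32C helper as in the earlier sketch. */
    static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
    }

    int main(void)
    {
        /* Hypothetical layout: two 4 KiB source vectors (-C 2) feeding one
         * 8 KiB destination. */
        static uint8_t src[2][4096], dst[8192];
        memset(src[0], 0x11, sizeof(src[0]));
        memset(src[1], 0x22, sizeof(src[1]));

        uint32_t crc = 0;
        size_t off = 0;
        for (int v = 0; v < 2; v++) {
            memcpy(dst + off, src[v], sizeof(src[v]));   /* the "copy" half    */
            crc = crc32c(crc, src[v], sizeof(src[v]));   /* the "crc32c" half  */
            off += sizeof(src[v]);
        }
        printf("chained crc32c over %zu bytes = 0x%08x\n", off, (unsigned)crc);
        return 0;
    }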
00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:27.585 22:51:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.022 00:10:29.022 real 0m1.668s 00:10:29.022 user 0m1.408s 00:10:29.022 sys 0m0.260s 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.022 22:51:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:29.022 ************************************ 00:10:29.022 END TEST accel_copy_crc32c_C2 00:10:29.022 ************************************ 00:10:29.022 22:51:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:29.022 22:51:05 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:29.022 22:51:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:29.022 22:51:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.022 22:51:05 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.022 ************************************ 00:10:29.022 START TEST accel_dualcast 00:10:29.022 ************************************ 00:10:29.022 22:51:05 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:29.022 22:51:05 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:10:29.022 [2024-07-22 22:51:05.063972] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
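The dualcast run that starts here writes one source buffer to two destinations in a single operation, which accel_perf then checks when -y is given. A rough conceptual model in C follows; the dualcast() helper and the 0x5A pattern are illustrative only, not the SPDK software-module code.

    #include <stdint.h>
    #include <string.h>
    #include <assert.h>

    /* Conceptual "dualcast": one source, two destinations written in one op,
     * then both verified against the source (the -y option). */
    static void dualcast(uint8_t *dst1, uint8_t *dst2, const uint8_t *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        static uint8_t src[4096], d1[4096], d2[4096];
        memset(src, 0x5A, sizeof(src));
        dualcast(d1, d2, src, sizeof(src));
        assert(memcmp(d1, src, sizeof(src)) == 0);
        assert(memcmp(d2, src, sizeof(src)) == 0);
        return 0;
    }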
00:10:29.022 [2024-07-22 22:51:05.064038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758799 ] 00:10:29.022 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.022 [2024-07-22 22:51:05.163467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.022 [2024-07-22 22:51:05.318441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.283 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:29.284 22:51:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:30.663 22:51:06 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.663 22:51:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:30.664 22:51:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:30.664 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:30.664 22:51:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:30.664 22:51:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:30.664 22:51:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:30.664 22:51:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.664 00:10:30.664 real 0m1.619s 00:10:30.664 user 0m1.393s 00:10:30.664 sys 0m0.237s 00:10:30.664 22:51:06 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.664 22:51:06 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:30.664 ************************************ 00:10:30.664 END TEST accel_dualcast 00:10:30.664 ************************************ 00:10:30.664 22:51:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:30.664 22:51:06 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:30.664 22:51:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:30.664 22:51:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.664 22:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:10:30.664 ************************************ 00:10:30.664 START TEST accel_compare 00:10:30.664 ************************************ 00:10:30.664 22:51:06 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:30.664 22:51:06 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:30.664 [2024-07-22 22:51:06.775812] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
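Note: the dualcast block above closes with its module/opcode pass check and a real/user/sys timing line; the compare block starting here drives the same accel_perf example binary with a different -w workload. A minimal sketch of reproducing this run by hand, outside run_test: the binary path and the -t/-w/-y flags are taken verbatim from the trace, while dropping the -c /dev/fd/62 config that the harness generates is an assumption that the software module needs no extra configuration here.

    # hedged re-run of the compare workload, outside the autotest harness
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w compare -y

The -t 1 matches the '1 seconds' value read in the trace, -w selects the opcode, and -y asks accel_perf to verify the result buffers.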
00:10:30.664 [2024-07-22 22:51:06.775957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759073 ] 00:10:30.664 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.664 [2024-07-22 22:51:06.913078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.925 [2024-07-22 22:51:07.072502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:30.925 22:51:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 
22:51:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:32.306 22:51:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.306 00:10:32.306 real 0m1.657s 00:10:32.306 user 0m1.395s 00:10:32.306 sys 0m0.275s 00:10:32.306 22:51:08 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.306 22:51:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:32.306 ************************************ 00:10:32.306 END TEST accel_compare 00:10:32.306 ************************************ 00:10:32.306 22:51:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:32.306 22:51:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:32.306 22:51:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:32.306 22:51:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.306 22:51:08 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.306 ************************************ 00:10:32.306 START TEST accel_xor 00:10:32.306 ************************************ 00:10:32.306 22:51:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:32.306 22:51:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:32.306 [2024-07-22 22:51:08.508376] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
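Note: each of these TEST blocks follows the same shape: xtrace of the parameter reads, the EAL and reactor start-up notices, the pass check against the selected module, and a real/user/sys timing line. A quick way to skim a console dump like this one is to pull out just those markers; console.log below is a stand-in name for a saved copy of this output, not a file the harness writes.

    # reduce the log to test names, pass markers and wall-clock times
    grep -E 'START TEST|END TEST|real[[:space:]]+[0-9]' console.log

This leaves only the test boundaries and their timings, which is usually what matters when comparing runs.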
00:10:32.306 [2024-07-22 22:51:08.508521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759356 ] 00:10:32.306 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.566 [2024-07-22 22:51:08.640084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.566 [2024-07-22 22:51:08.799058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.826 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:32.827 22:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:34.208 00:10:34.208 real 0m1.669s 00:10:34.208 user 0m1.414s 00:10:34.208 sys 0m0.261s 00:10:34.208 22:51:10 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.208 22:51:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 ************************************ 00:10:34.208 END TEST accel_xor 00:10:34.208 ************************************ 00:10:34.208 22:51:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:34.208 22:51:10 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:34.208 22:51:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:34.208 22:51:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.208 22:51:10 accel -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 ************************************ 00:10:34.208 START TEST accel_xor 00:10:34.208 ************************************ 00:10:34.208 22:51:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:34.208 22:51:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:34.208 [2024-07-22 22:51:10.253490] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
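Note: this second accel_xor block is the same workload re-run with -x 3, as visible in the accel_perf command line above, so the XOR is computed across three source buffers instead of the two used by the previous run (compare the val=2 and val=3 reads in the two traces). A hand-run equivalent, under the same assumption as earlier that the harness-generated JSON config can be dropped:

    # hedged re-run of the three-source xor workload
    ./build/examples/accel_perf -t 1 -w xor -y -x 3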
00:10:34.208 [2024-07-22 22:51:10.253607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759895 ] 00:10:34.208 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.208 [2024-07-22 22:51:10.377425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.469 [2024-07-22 22:51:10.533246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:34.469 22:51:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:35.851 22:51:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.851 00:10:35.851 real 0m1.661s 00:10:35.851 user 0m1.399s 00:10:35.851 sys 0m0.256s 00:10:35.851 22:51:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.851 22:51:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:35.851 ************************************ 00:10:35.851 END TEST accel_xor 00:10:35.851 ************************************ 00:10:35.851 22:51:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:35.851 22:51:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:35.851 22:51:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:35.851 22:51:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.851 22:51:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:35.851 ************************************ 00:10:35.851 START TEST accel_dif_verify 00:10:35.851 ************************************ 00:10:35.851 22:51:11 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:35.851 22:51:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:35.851 [2024-07-22 22:51:11.991651] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
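Note: the dif_verify and dif_generate traces below read two extra sizes beyond the usual 4096-byte buffer: '512 bytes' and '8 bytes'. The xtrace does not label them, but they are consistent with 512-byte protection-information blocks carrying an 8-byte DIF field each. The pass check that closes every block boils down to the variables the trace sets (accel_module, accel_opc); a condensed sketch, where expected_module is a made-up name for whichever module the test was asked to use:

    # hedged sketch of the end-of-test check seen in each block above
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && \
        [[ "$accel_module" == "$expected_module" ]] && echo PASS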
00:10:35.851 [2024-07-22 22:51:11.991790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760173 ] 00:10:35.851 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.851 [2024-07-22 22:51:12.134225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.112 [2024-07-22 22:51:12.286853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.112 22:51:12 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:36.113 22:51:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:37.493 22:51:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.493 00:10:37.493 real 0m1.683s 00:10:37.493 user 0m1.428s 00:10:37.493 sys 0m0.252s 00:10:37.493 22:51:13 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.493 22:51:13 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:37.493 ************************************ 00:10:37.493 END TEST accel_dif_verify 00:10:37.493 ************************************ 00:10:37.493 22:51:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:37.493 22:51:13 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:37.493 22:51:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:37.493 22:51:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.493 22:51:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:37.493 ************************************ 00:10:37.493 START TEST accel_dif_generate 00:10:37.493 ************************************ 00:10:37.493 22:51:13 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:37.493 
22:51:13 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:37.493 22:51:13 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:37.494 22:51:13 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:37.494 [2024-07-22 22:51:13.752307] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:37.494 [2024-07-22 22:51:13.752476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760342 ] 00:10:37.754 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.754 [2024-07-22 22:51:13.887924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.754 [2024-07-22 22:51:14.045324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:38.015 22:51:14 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.015 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:38.016 22:51:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:39.396 22:51:15 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:39.396 22:51:15 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.396 00:10:39.396 real 0m1.678s 00:10:39.396 user 0m1.409s 00:10:39.396 sys 0m0.267s 00:10:39.396 22:51:15 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.396 22:51:15 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:39.396 ************************************ 00:10:39.396 END TEST accel_dif_generate 00:10:39.396 ************************************ 00:10:39.396 22:51:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:39.396 22:51:15 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:39.396 22:51:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:39.396 22:51:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.396 22:51:15 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.396 ************************************ 00:10:39.396 START TEST accel_dif_generate_copy 00:10:39.396 ************************************ 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.396 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:39.397 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:39.397 [2024-07-22 22:51:15.492629] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
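Note: every workload so far (dualcast, compare, xor, xor -x 3, dif_verify, dif_generate) completed on the software module in roughly 1.6-1.7 s of wall time for a -t 1 one-second run; the overhead is most plausibly the EAL initialization and reactor start-up/teardown visible in each block. The dif_generate_copy block starting here can be reproduced the same way as the earlier workloads, again assuming the harness config can be omitted:

    # hedged re-run of the dif_generate_copy workload
    ./build/examples/accel_perf -t 1 -w dif_generate_copy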
00:10:39.397 [2024-07-22 22:51:15.492739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760609 ] 00:10:39.397 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.397 [2024-07-22 22:51:15.619413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.656 [2024-07-22 22:51:15.778988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.656 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
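For reference, the accel_perf invocation logged above can also be run standalone. A minimal sketch, assuming the SPDK tree at the workspace path shown in the log has already been built; the -c /dev/fd/62 argument only feeds the JSON produced by build_accel_config and can normally be omitted when the default software accel module is acceptable:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # dif_generate_copy workload for 1 second; uses the software accel module by default
  ./build/examples/accel_perf -t 1 -w dif_generate_copy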
00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:39.657 22:51:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.034 00:10:41.034 real 0m1.665s 00:10:41.034 user 0m1.391s 00:10:41.034 sys 0m0.269s 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.034 22:51:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:41.034 ************************************ 00:10:41.034 END TEST accel_dif_generate_copy 00:10:41.034 ************************************ 00:10:41.034 22:51:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:41.034 22:51:17 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:41.034 22:51:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:41.034 22:51:17 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:41.034 22:51:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.034 22:51:17 accel -- common/autotest_common.sh@10 -- # set +x 00:10:41.034 ************************************ 00:10:41.034 START TEST accel_comp 00:10:41.034 ************************************ 00:10:41.034 22:51:17 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:41.034 22:51:17 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.034 22:51:17 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:41.035 22:51:17 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:41.035 22:51:17 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:41.035 [2024-07-22 22:51:17.240042] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:41.035 [2024-07-22 22:51:17.240203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760762 ] 00:10:41.035 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.294 [2024-07-22 22:51:17.373818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.294 [2024-07-22 22:51:17.531404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:41.554 22:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:42.932 22:51:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.932 00:10:42.932 real 0m1.670s 00:10:42.932 user 0m1.414s 00:10:42.932 sys 0m0.251s 00:10:42.932 22:51:18 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.932 22:51:18 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:42.932 ************************************ 00:10:42.932 END TEST accel_comp 00:10:42.932 ************************************ 00:10:42.932 22:51:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:42.932 22:51:18 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:42.932 22:51:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:42.932 22:51:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.932 22:51:18 accel -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.932 ************************************ 00:10:42.932 START TEST accel_decomp 00:10:42.932 ************************************ 00:10:42.932 22:51:18 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:42.932 22:51:18 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:42.932 [2024-07-22 22:51:18.981642] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
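The compress/decompress tests feed accel_perf the corpus at test/accel/bib and, for the decompress cases, verify the output with -y, as in the command logged above. A hedged standalone sketch under the same assumptions as before:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # decompress test/accel/bib for 1 second and verify the result (-y)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y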
00:10:42.932 [2024-07-22 22:51:18.981787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761044 ] 00:10:42.932 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.932 [2024-07-22 22:51:19.120564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.192 [2024-07-22 22:51:19.280515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.192 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:43.193 22:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:44.572 22:51:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.572 00:10:44.572 real 0m1.681s 00:10:44.572 user 0m1.423s 00:10:44.572 sys 0m0.254s 00:10:44.572 22:51:20 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.572 22:51:20 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:44.572 ************************************ 00:10:44.572 END TEST accel_decomp 00:10:44.572 ************************************ 00:10:44.572 22:51:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:44.572 22:51:20 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:44.572 22:51:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:44.572 22:51:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.572 22:51:20 accel -- common/autotest_common.sh@10 -- # set +x 00:10:44.572 ************************************ 00:10:44.572 START TEST accel_decomp_full 00:10:44.572 ************************************ 00:10:44.572 22:51:20 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:44.572 22:51:20 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:44.572 22:51:20 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:44.572 [2024-07-22 22:51:20.739687] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:44.572 [2024-07-22 22:51:20.739832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761203 ] 00:10:44.572 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.572 [2024-07-22 22:51:20.877104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.831 [2024-07-22 22:51:21.009795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.831 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:44.832 22:51:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:46.210 22:51:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.210 00:10:46.210 real 0m1.681s 00:10:46.210 user 0m1.421s 00:10:46.210 sys 0m0.255s 00:10:46.210 22:51:22 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.210 22:51:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:46.210 ************************************ 00:10:46.210 END TEST accel_decomp_full 00:10:46.210 ************************************ 00:10:46.210 22:51:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:46.210 22:51:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:46.210 22:51:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
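The *_mcore variants that follow rerun the same decompress workload with a wider core mask (-m 0xf); the log reports four reactors started on cores 0-3 for these runs. A sketch under the same assumptions:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same decompress/verify run spread across four cores (core mask 0xf)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf
The 'full' variants additionally pass -o 0, for which the trace above reports a 111250-byte transfer size instead of the 4096 bytes used elsewhere.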
00:10:46.210 22:51:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.210 22:51:22 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.210 ************************************ 00:10:46.210 START TEST accel_decomp_mcore 00:10:46.210 ************************************ 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:46.210 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.211 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.211 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.211 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.211 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.211 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:46.211 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:46.211 [2024-07-22 22:51:22.497472] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:10:46.211 [2024-07-22 22:51:22.497619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761496 ] 00:10:46.471 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.471 [2024-07-22 22:51:22.633436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.731 [2024-07-22 22:51:22.797957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.731 [2024-07-22 22:51:22.798017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.731 [2024-07-22 22:51:22.798077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.731 [2024-07-22 22:51:22.798081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:46.731 22:51:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.144 00:10:48.144 real 0m1.652s 00:10:48.144 user 0m0.023s 00:10:48.144 sys 0m0.002s 00:10:48.144 22:51:24 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.144 22:51:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:48.144 ************************************ 00:10:48.144 END TEST accel_decomp_mcore 00:10:48.144 ************************************ 00:10:48.144 22:51:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:48.144 22:51:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.145 22:51:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:48.145 22:51:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.145 22:51:24 accel -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 ************************************ 00:10:48.145 START TEST accel_decomp_full_mcore 00:10:48.145 ************************************ 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:48.145 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:48.145 [2024-07-22 22:51:24.226462] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
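The accel_decomp_full_mcore case kicked off above reduces to a single accel_perf invocation that the harness drives through run_test/accel_test. A minimal standalone sketch follows, assuming a local SPDK checkout at a placeholder path and reading the flag meanings off the trace (-t run time in seconds, -w workload, -l compressed input file, -y verify the output, -o 0 use the full buffer, -m core mask); treat it as an illustration, not the harness's exact command line:

    # run the software decompress path on 4 cores (mask 0xf) for 1 second and verify the result
    SPDK_DIR=./spdk   # placeholder; the CI job uses its own Jenkins workspace path
    "$SPDK_DIR"/build/examples/accel_perf \
        -t 1 -w decompress \
        -l "$SPDK_DIR"/test/accel/bib \
        -y -o 0 -m 0xf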
00:10:48.145 [2024-07-22 22:51:24.226541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761658 ] 00:10:48.145 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.145 [2024-07-22 22:51:24.364373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.404 [2024-07-22 22:51:24.528323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.404 [2024-07-22 22:51:24.528356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.404 [2024-07-22 22:51:24.528415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.404 [2024-07-22 22:51:24.528420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.404 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:48.405 22:51:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.783 00:10:49.783 real 0m1.662s 00:10:49.783 user 0m5.004s 00:10:49.783 sys 0m0.264s 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.783 22:51:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 ************************************ 00:10:49.783 END TEST accel_decomp_full_mcore 00:10:49.783 ************************************ 00:10:49.783 22:51:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.783 22:51:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:49.783 22:51:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:49.783 22:51:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.783 22:51:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 ************************************ 00:10:49.783 START TEST accel_decomp_mthread 00:10:49.783 ************************************ 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:49.783 22:51:25 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:49.783 [2024-07-22 22:51:25.968253] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
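The accel_decomp_mthread variant started above exercises the same decompress workload on a single core (no -m is passed, and the trace reports only one core available) but adds -T 2, which appears to ask accel_perf for two worker threads; the trace also shows it operating on 4096-byte buffers rather than the full 111250-byte input used by the "full" variants. A hedged sketch of the equivalent call, reusing the placeholder path from the earlier sketch:

    # single-core, two-thread software decompress of the same test vector
    "$SPDK_DIR"/build/examples/accel_perf \
        -t 1 -w decompress \
        -l "$SPDK_DIR"/test/accel/bib \
        -y -T 2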
00:10:49.783 [2024-07-22 22:51:25.968419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761938 ] 00:10:49.783 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.043 [2024-07-22 22:51:26.102128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.043 [2024-07-22 22:51:26.262539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.303 22:51:26 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.303 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.304 22:51:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.685 00:10:51.685 real 0m1.669s 00:10:51.685 user 0m1.416s 00:10:51.685 sys 0m0.251s 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.685 22:51:27 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:51.685 ************************************ 00:10:51.685 END TEST accel_decomp_mthread 00:10:51.685 ************************************ 00:10:51.685 22:51:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:51.685 22:51:27 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:51.685 22:51:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:51.685 22:51:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.685 22:51:27 accel -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.685 ************************************ 00:10:51.685 START TEST accel_decomp_full_mthread 00:10:51.685 ************************************ 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.685 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:51.686 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:51.686 22:51:27 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:51.686 [2024-07-22 22:51:27.716814] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
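Each of these accel tests also passes -c /dev/fd/62 to accel_perf: build_accel_config assembles an accel JSON config (empty in this run, since every guard in the traced build_accel_config evaluates false) and hands it to the process on file descriptor 62. One way to approximate that plumbing outside the harness, offered as an assumption rather than the harness's exact mechanism, is a bash fd redirection from a process substitution:

    # feed a stand-in accel config to accel_perf on fd 62 ('{}' stands in for whatever build_accel_config emits)
    "$SPDK_DIR"/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_DIR"/test/accel/bib -y -o 0 -T 2 \
        62< <(echo '{}')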
00:10:51.686 [2024-07-22 22:51:27.716957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762101 ] 00:10:51.686 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.686 [2024-07-22 22:51:27.854127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.946 [2024-07-22 22:51:28.014337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.946 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:51.947 22:51:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.326 00:10:53.326 real 0m1.729s 00:10:53.326 user 0m1.452s 00:10:53.326 sys 0m0.273s 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.326 22:51:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:53.326 ************************************ 00:10:53.326 END TEST accel_decomp_full_mthread 
00:10:53.326 ************************************ 00:10:53.326 22:51:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:53.326 22:51:29 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:53.326 22:51:29 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:53.327 22:51:29 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:53.327 22:51:29 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:53.327 22:51:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:53.327 22:51:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.327 22:51:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:53.327 22:51:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.327 22:51:29 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.327 22:51:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.327 22:51:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:53.327 22:51:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:53.327 22:51:29 accel -- accel/accel.sh@41 -- # jq -r . 00:10:53.327 ************************************ 00:10:53.327 START TEST accel_dif_functional_tests 00:10:53.327 ************************************ 00:10:53.327 22:51:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:53.327 [2024-07-22 22:51:29.529597] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:53.327 [2024-07-22 22:51:29.529697] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762371 ] 00:10:53.327 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.327 [2024-07-22 22:51:29.629503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.596 [2024-07-22 22:51:29.793115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.596 [2024-07-22 22:51:29.793179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.596 [2024-07-22 22:51:29.793185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.860 00:10:53.860 00:10:53.860 CUnit - A unit testing framework for C - Version 2.1-3 00:10:53.860 http://cunit.sourceforge.net/ 00:10:53.860 00:10:53.860 00:10:53.860 Suite: accel_dif 00:10:53.860 Test: verify: DIF generated, GUARD check ...passed 00:10:53.860 Test: verify: DIF generated, APPTAG check ...passed 00:10:53.860 Test: verify: DIF generated, REFTAG check ...passed 00:10:53.860 Test: verify: DIF not generated, GUARD check ...[2024-07-22 22:51:29.909124] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:53.860 passed 00:10:53.860 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 22:51:29.909215] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:53.860 passed 00:10:53.860 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 22:51:29.909261] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:53.860 passed 00:10:53.860 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:53.860 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 22:51:29.909367] dif.c: 
876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:53.860 passed 00:10:53.860 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:53.860 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:53.860 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:53.860 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 22:51:29.909562] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:53.860 passed 00:10:53.860 Test: verify copy: DIF generated, GUARD check ...passed 00:10:53.860 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:53.860 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:53.860 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 22:51:29.909797] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:53.860 passed 00:10:53.860 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 22:51:29.909875] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:53.860 passed 00:10:53.860 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 22:51:29.909927] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:53.860 passed 00:10:53.860 Test: generate copy: DIF generated, GUARD check ...passed 00:10:53.860 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:53.860 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:53.860 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:53.860 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:53.860 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:53.860 Test: generate copy: iovecs-len validate ...[2024-07-22 22:51:29.910238] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:53.860 passed 00:10:53.860 Test: generate copy: buffer alignment validate ...passed 00:10:53.860 00:10:53.860 Run Summary: Type Total Ran Passed Failed Inactive 00:10:53.860 suites 1 1 n/a 0 0 00:10:53.860 tests 26 26 26 0 0 00:10:53.860 asserts 115 115 115 0 n/a 00:10:53.860 00:10:53.860 Elapsed time = 0.005 seconds 00:10:54.120 00:10:54.120 real 0m0.720s 00:10:54.120 user 0m1.046s 00:10:54.120 sys 0m0.266s 00:10:54.120 22:51:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.120 22:51:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:54.120 ************************************ 00:10:54.120 END TEST accel_dif_functional_tests 00:10:54.120 ************************************ 00:10:54.120 22:51:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:54.120 00:10:54.120 real 0m39.360s 00:10:54.120 user 0m40.241s 00:10:54.120 sys 0m7.973s 00:10:54.120 22:51:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.120 22:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:54.120 ************************************ 00:10:54.120 END TEST accel 00:10:54.120 ************************************ 00:10:54.120 22:51:30 -- common/autotest_common.sh@1142 -- # return 0 00:10:54.120 22:51:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:54.120 22:51:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:54.120 22:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.120 22:51:30 -- common/autotest_common.sh@10 -- # set +x 00:10:54.120 ************************************ 00:10:54.120 START TEST accel_rpc 00:10:54.120 ************************************ 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:54.120 * Looking for test storage... 00:10:54.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:10:54.120 22:51:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:54.120 22:51:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=762447 00:10:54.120 22:51:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:54.120 22:51:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 762447 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 762447 ']' 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.120 22:51:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.380 [2024-07-22 22:51:30.528073] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
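The accel_rpc suite starts a full spdk_tgt with --wait-for-rpc, as logged above: the target comes up with subsystem initialization deferred, which is presumably what allows the test to reassign opcodes before the accel framework finishes loading. The start-and-wait pattern the harness uses (spdk_tgt_pid plus waitforlisten) can be sketched, with placeholder paths and a simple polling loop standing in for the real helper:

    # start the target with init deferred, then poll the RPC socket until it answers
    "$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    until "$SPDK_DIR"/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done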
00:10:54.380 [2024-07-22 22:51:30.528247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762447 ] 00:10:54.380 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.380 [2024-07-22 22:51:30.654505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.640 [2024-07-22 22:51:30.808425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.900 22:51:30 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.900 22:51:30 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:54.900 22:51:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:54.900 22:51:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:54.900 22:51:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:54.900 22:51:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:54.900 22:51:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:54.900 22:51:30 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:54.900 22:51:30 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.900 22:51:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.900 ************************************ 00:10:54.900 START TEST accel_assign_opcode 00:10:54.900 ************************************ 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:54.900 [2024-07-22 22:51:31.025902] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:54.900 [2024-07-22 22:51:31.033905] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.900 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
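The accel_assign_opcode test body is essentially three RPCs, all visible in the trace: assign the copy opcode (first to a bogus module, then to software), finish initialization, and read the assignment back. Reproducing it by hand with rpc.py, assuming the target started in the previous step is still listening on the default /var/tmp/spdk.sock:

    RPC="$SPDK_DIR"/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m incorrect   # accepted at this stage; logged only as a NOTICE
    $RPC accel_assign_opc -o copy -m software    # the assignment the final check expects to see
    $RPC framework_start_init
    $RPC accel_get_opc_assignments | jq -r .copy # the test greps this output for "software"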
00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:55.160 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.419 software 00:10:55.419 00:10:55.419 real 0m0.502s 00:10:55.419 user 0m0.083s 00:10:55.419 sys 0m0.017s 00:10:55.419 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.419 22:51:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:55.419 ************************************ 00:10:55.419 END TEST accel_assign_opcode 00:10:55.419 ************************************ 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:55.419 22:51:31 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 762447 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 762447 ']' 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 762447 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 762447 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 762447' 00:10:55.419 killing process with pid 762447 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@967 -- # kill 762447 00:10:55.419 22:51:31 accel_rpc -- common/autotest_common.sh@972 -- # wait 762447 00:10:55.987 00:10:55.987 real 0m1.833s 00:10:55.987 user 0m1.834s 00:10:55.987 sys 0m0.744s 00:10:55.987 22:51:32 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.987 22:51:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.987 ************************************ 00:10:55.987 END TEST accel_rpc 00:10:55.987 ************************************ 00:10:55.987 22:51:32 -- common/autotest_common.sh@1142 -- # return 0 00:10:55.987 22:51:32 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:55.987 22:51:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.987 22:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.987 22:51:32 -- common/autotest_common.sh@10 -- # set +x 00:10:55.987 ************************************ 00:10:55.987 START TEST app_cmdline 00:10:55.987 ************************************ 00:10:55.987 22:51:32 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:55.987 * Looking for test storage... 
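When a suite finishes, the harness tears the target down with killprocess, and the trace above shows the checks that helper performs before sending the signal. A simplified sketch of that logic, built only from the steps visible in the log (the real helper also special-cases a bare sudo wrapper rather than simply refusing):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                         # process must still exist
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }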
00:10:55.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:55.988 22:51:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:55.988 22:51:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=762779 00:10:55.988 22:51:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:55.988 22:51:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 762779 00:10:55.988 22:51:32 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 762779 ']' 00:10:55.988 22:51:32 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.988 22:51:32 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.988 22:51:32 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.988 22:51:32 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.988 22:51:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:56.248 [2024-07-22 22:51:32.403542] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:56.248 [2024-07-22 22:51:32.403731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762779 ] 00:10:56.248 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.248 [2024-07-22 22:51:32.540360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.508 [2024-07-22 22:51:32.693495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.076 22:51:33 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.076 22:51:33 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:10:57.076 22:51:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:57.076 { 00:10:57.076 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:10:57.076 "fields": { 00:10:57.076 "major": 24, 00:10:57.076 "minor": 9, 00:10:57.076 "patch": 0, 00:10:57.076 "suffix": "-pre", 00:10:57.076 "commit": "f7b31b2b9" 00:10:57.076 } 00:10:57.076 } 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:57.334 22:51:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:57.334 22:51:33 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:57.905 request: 00:10:57.905 { 00:10:57.905 "method": "env_dpdk_get_mem_stats", 00:10:57.905 "req_id": 1 00:10:57.905 } 00:10:57.905 Got JSON-RPC error response 00:10:57.905 response: 00:10:57.905 { 00:10:57.905 "code": -32601, 00:10:57.905 "message": "Method not found" 00:10:57.905 } 00:10:57.905 22:51:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:10:57.905 22:51:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:57.906 22:51:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 762779 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 762779 ']' 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 762779 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 762779 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 762779' 00:10:57.906 killing process with pid 762779 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@967 -- # kill 762779 00:10:57.906 22:51:34 app_cmdline -- common/autotest_common.sh@972 -- # wait 762779 00:10:58.474 00:10:58.474 real 0m2.496s 00:10:58.474 user 0m3.261s 00:10:58.474 sys 0m0.816s 00:10:58.474 22:51:34 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.474 
22:51:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:58.474 ************************************ 00:10:58.474 END TEST app_cmdline 00:10:58.474 ************************************ 00:10:58.474 22:51:34 -- common/autotest_common.sh@1142 -- # return 0 00:10:58.474 22:51:34 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:58.474 22:51:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.474 22:51:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.474 22:51:34 -- common/autotest_common.sh@10 -- # set +x 00:10:58.735 ************************************ 00:10:58.735 START TEST version 00:10:58.735 ************************************ 00:10:58.735 22:51:34 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:58.735 * Looking for test storage... 00:10:58.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:58.735 22:51:34 version -- app/version.sh@17 -- # get_header_version major 00:10:58.735 22:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # cut -f2 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:10:58.735 22:51:34 version -- app/version.sh@17 -- # major=24 00:10:58.735 22:51:34 version -- app/version.sh@18 -- # get_header_version minor 00:10:58.735 22:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # cut -f2 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:10:58.735 22:51:34 version -- app/version.sh@18 -- # minor=9 00:10:58.735 22:51:34 version -- app/version.sh@19 -- # get_header_version patch 00:10:58.735 22:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # cut -f2 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:10:58.735 22:51:34 version -- app/version.sh@19 -- # patch=0 00:10:58.735 22:51:34 version -- app/version.sh@20 -- # get_header_version suffix 00:10:58.735 22:51:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # cut -f2 00:10:58.735 22:51:34 version -- app/version.sh@14 -- # tr -d '"' 00:10:58.735 22:51:34 version -- app/version.sh@20 -- # suffix=-pre 00:10:58.735 22:51:34 version -- app/version.sh@22 -- # version=24.9 00:10:58.735 22:51:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:58.735 22:51:34 version -- app/version.sh@28 -- # version=24.9rc0 00:10:58.735 22:51:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:58.735 22:51:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
00:10:58.735 22:51:35 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:58.735 22:51:35 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:58.735 00:10:58.735 real 0m0.209s 00:10:58.735 user 0m0.115s 00:10:58.735 sys 0m0.130s 00:10:58.735 22:51:35 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.735 22:51:35 version -- common/autotest_common.sh@10 -- # set +x 00:10:58.735 ************************************ 00:10:58.735 END TEST version 00:10:58.735 ************************************ 00:10:58.735 22:51:35 -- common/autotest_common.sh@1142 -- # return 0 00:10:58.735 22:51:35 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:10:58.735 22:51:35 -- spdk/autotest.sh@198 -- # uname -s 00:10:58.735 22:51:35 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:10:58.735 22:51:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:10:58.735 22:51:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:10:58.735 22:51:35 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:10:58.735 22:51:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:58.736 22:51:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:58.736 22:51:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.736 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.995 22:51:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:58.995 22:51:35 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:58.995 22:51:35 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:10:58.995 22:51:35 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:10:58.995 22:51:35 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:10:58.995 22:51:35 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:10:58.995 22:51:35 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:58.995 22:51:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.995 22:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.995 22:51:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.995 ************************************ 00:10:58.995 START TEST nvmf_tcp 00:10:58.995 ************************************ 00:10:58.995 22:51:35 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:58.995 * Looking for test storage... 00:10:58.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:58.995 22:51:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:58.995 22:51:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:58.995 22:51:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:58.995 22:51:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.995 22:51:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.995 22:51:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.995 ************************************ 00:10:58.995 START TEST nvmf_target_core 00:10:58.995 ************************************ 00:10:58.995 22:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:59.256 * Looking for test storage... 
00:10:59.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.256 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.257 ************************************ 00:10:59.257 START TEST nvmf_abort 00:10:59.257 ************************************ 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:59.257 * Looking for test storage... 00:10:59.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.257 22:51:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.257 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.258 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.258 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.258 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.258 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.258 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.258 22:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.552 22:51:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.552 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:02.553 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:02.553 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.553 22:51:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:02.553 Found net devices under 0000:84:00.0: cvl_0_0 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:02.553 Found net devices under 0000:84:00.1: cvl_0_1 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.553 22:51:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:11:02.553 00:11:02.553 --- 10.0.0.2 ping statistics --- 00:11:02.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.553 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:02.553 00:11:02.553 --- 10.0.0.1 ping statistics --- 00:11:02.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.553 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=764990 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 764990 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 764990 ']' 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.553 22:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.812 [2024-07-22 22:51:38.936268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:11:02.813 [2024-07-22 22:51:38.936458] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.813 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.813 [2024-07-22 22:51:39.070436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.071 [2024-07-22 22:51:39.185567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.071 [2024-07-22 22:51:39.185636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.071 [2024-07-22 22:51:39.185655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.071 [2024-07-22 22:51:39.185672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.071 [2024-07-22 22:51:39.185691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.071 [2024-07-22 22:51:39.186080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.071 [2024-07-22 22:51:39.186143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.071 [2024-07-22 22:51:39.186147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.008 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.008 [2024-07-22 22:51:40.302393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 Malloc0 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.268 Delay0 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 [2024-07-22 22:51:40.378949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 22:51:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:04.268 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.268 [2024-07-22 22:51:40.514825] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:06.811 Initializing NVMe Controllers 00:11:06.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:06.811 controller IO queue size 128 less than required 00:11:06.811 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:06.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:06.811 Initialization complete. Launching workers. 
00:11:06.811 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 24281 00:11:06.811 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24346, failed to submit 62 00:11:06.811 success 24285, unsuccess 61, failed 0 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.811 rmmod nvme_tcp 00:11:06.811 rmmod nvme_fabrics 00:11:06.811 rmmod nvme_keyring 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 764990 ']' 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 764990 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 764990 ']' 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 764990 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 764990 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 764990' 00:11:06.811 killing process with pid 764990 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 764990 00:11:06.811 22:51:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 764990 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.811 22:51:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.351 00:11:09.351 real 0m9.716s 00:11:09.351 user 0m14.527s 00:11:09.351 sys 0m3.882s 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:09.351 ************************************ 00:11:09.351 END TEST nvmf_abort 00:11:09.351 ************************************ 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.351 ************************************ 00:11:09.351 START TEST nvmf_ns_hotplug_stress 00:11:09.351 ************************************ 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:09.351 * Looking for test storage... 
00:11:09.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.351 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.352 22:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.657 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
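Note: the nvmf/common.sh trace above is assembling the argument list the target will eventually be launched with: a shared-memory id plus the 0xFFFF tracepoint mask, with the huge-page override left empty. A minimal sketch of that assembly, using the variable names from the trace; anything not visible in the trace (the base binary seed, the literal shm id) is an assumption taken from the launch command later in this log.
# Sketch of the argument assembly traced in nvmf/common.sh above.
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0                            # "-i 0" in the launch command below
NO_HUGE=()                                   # stays empty here, so no huge-page override is added
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # common.sh@29: shm id + full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                  # common.sh@31
# common.sh@270 later prefixes the array with: ip netns exec "$NVMF_TARGET_NAMESPACE"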
00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:12.658 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.658 22:51:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:12.658 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:12.658 Found net devices under 0000:84:00.0: cvl_0_0 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:12.658 Found net devices under 0000:84:00.1: cvl_0_1 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:11:12.658 00:11:12.658 --- 10.0.0.2 ping statistics --- 00:11:12.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.658 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:12.658 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:11:12.659 00:11:12.659 --- 10.0.0.1 ping statistics --- 00:11:12.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.659 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=767605 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 767605 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 767605 ']' 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
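Note: the prepare_net_devs trace above resolves the two Intel E810 ports (0000:84:00.0/.1, device 0x159b) to their kernel interfaces through sysfs, and nvmf_tcp_init then moves one of them into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over a real link on a single host. A condensed sketch of both steps, using only commands and names that appear in the trace:
# Sketch: PCI -> netdev discovery plus the namespace topology built above.
pci_devs=(0000:84:00.0 0000:84:00.1)                 # the two E810 ports found in this run
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # netdevs bound to this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done                                                 # -> cvl_0_0 and cvl_0_1 in this log

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # the two pings verified above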
00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.659 22:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.659 [2024-07-22 22:51:48.929115] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:11:12.659 [2024-07-22 22:51:48.929280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.918 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.918 [2024-07-22 22:51:49.053969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:12.918 [2024-07-22 22:51:49.164637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.918 [2024-07-22 22:51:49.164707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.918 [2024-07-22 22:51:49.164726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.918 [2024-07-22 22:51:49.164742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.918 [2024-07-22 22:51:49.164757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.918 [2024-07-22 22:51:49.164835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.918 [2024-07-22 22:51:49.164898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.918 [2024-07-22 22:51:49.164904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:13.177 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:13.743 [2024-07-22 22:51:49.847642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.743 22:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:14.311 22:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.885 
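Note: with connectivity verified, nvmfappstart launches nvmf_tgt inside the namespace (core mask 0xE, hence the three reactors on cores 1-3 reported above) and ns_hotplug_stress.sh starts configuring it over /var/tmp/spdk.sock. A condensed sketch of that sequence; the rpc.py subcommands and flags are copied from the trace, and the socket-wait loop is a simplified stand-in for the harness's waitforlisten helper.
# Sketch: start the target in the namespace, then apply the RPCs traced above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified waitforlisten
"$rpc" nvmf_create_transport -t tcp -o -u 8192        # flags exactly as passed in the trace
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420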
[2024-07-22 22:51:51.109473] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.885 22:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.454 22:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:16.020 Malloc0 00:11:16.020 22:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:16.586 Delay0 00:11:16.586 22:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.151 22:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:17.718 NULL1 00:11:17.718 22:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:18.284 22:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=768180 00:11:18.284 22:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:18.284 22:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:18.284 22:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.284 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.218 Read completed with error (sct=0, sc=11) 00:11:19.218 22:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.993 Message suppressed 999 
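Note: the test pairs two deliberately mismatched namespaces: NSID 1 is a 32 MB malloc bdev wrapped in a delay bdev that injects a large artificial latency on every I/O (the -r/-t/-w/-n values above), and NSID 2 is a 1000 MB null bdev that completes reads immediately. spdk_nvme_perf then drives 512-byte random reads at queue depth 128 for 30 seconds while the hotplug loop removes and re-adds NSID 1 underneath it. A sketch with the parameters copied verbatim from the trace:
# Sketch: the bdev stack and initiator workload traced above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
"$rpc" bdev_malloc_create 32 512 -b Malloc0                      # small RAM-backed bdev
"$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000               # wrap it with injected latency
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
"$rpc" bdev_null_create NULL1 1000 512                           # fast null bdev
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2
"$spdk/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!                                                      # 768180 in this run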
times: Read completed with error (sct=0, sc=11) 00:11:19.993 22:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:19.993 22:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:20.559 true 00:11:20.559 22:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:20.559 22:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.934 22:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.500 22:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:22.500 22:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:22.758 true 00:11:22.758 22:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:22.758 22:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.133 22:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.680 22:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:24.680 22:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:24.938 true 00:11:24.938 22:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:24.938 22:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
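Note: the repetitive remove_ns / add_ns / bdev_null_resize pattern that fills the next screens is the actual stress loop of ns_hotplug_stress.sh (script lines 44-50 in the trace prefixes): for as long as the perf process is alive, NSID 1 is detached and re-attached and NULL1 is grown by one (1000, 1001, 1002, ...), so the initiator keeps seeing namespace attach/detach and size-change events under load. A sketch of that loop under the same assumptions:
# Sketch of the stress loop driving the output above and below (ns_hotplug_stress.sh@44-50).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000            # PERF_PID was captured when spdk_nvme_perf started (768180 here)
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
    (( null_size++ ))
    "$rpc" bdev_null_resize NULL1 "$null_size"                      # grow NULL1: 1001, 1002, ...
done
wait "$PERF_PID"          # script line 53: wait for perf to finish once it has exited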
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.503 22:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.277 22:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:26.277 22:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:26.844 true 00:11:26.844 22:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:26.844 22:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.218 22:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:28.735 22:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:28.735 22:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:29.301 true 00:11:29.301 22:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:29.301 22:52:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.689 22:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.951 22:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:30.951 22:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:31.517 true 00:11:31.517 22:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:31.517 22:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.892 22:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:33.458 22:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:33.458 22:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:34.026 true 00:11:34.026 22:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:34.026 22:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.961 22:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:34.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.737 22:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:35.737 22:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:36.303 true 00:11:36.303 22:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:36.303 22:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.678 22:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.244 22:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:38.244 22:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:38.811 true 00:11:38.811 22:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:38.811 22:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:39.745 22:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.312 22:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:40.312 22:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:40.879 true 00:11:40.879 22:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:40.879 22:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.836 22:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.401 22:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:42.401 22:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:42.659 true 00:11:42.659 22:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:42.659 22:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.032 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:11:44.032 22:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.548 22:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:44.548 22:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:44.806 true 00:11:44.806 22:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:44.806 22:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.180 22:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.697 22:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:46.697 22:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:47.262 true 00:11:47.263 22:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:47.263 22:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.829 22:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.829 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:11:47.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.346 22:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:48.346 22:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:48.346 Initializing NVMe Controllers 00:11:48.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:48.346 Controller IO queue size 128, less than required. 00:11:48.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:48.346 Controller IO queue size 128, less than required. 00:11:48.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:48.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:48.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:48.346 Initialization complete. Launching workers. 00:11:48.346 ======================================================== 00:11:48.346 Latency(us) 00:11:48.346 Device Information : IOPS MiB/s Average min max 00:11:48.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2826.63 1.38 33583.76 3419.75 2017141.21 00:11:48.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11196.88 5.47 11431.29 4326.90 742834.69 00:11:48.346 ======================================================== 00:11:48.346 Total : 14023.51 6.85 15896.43 3419.75 2017141.21 00:11:48.346 00:11:48.911 true 00:11:48.911 22:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 768180 00:11:48.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (768180) - No such process 00:11:48.911 22:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 768180 00:11:48.911 22:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.844 22:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.409 22:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:50.409 22:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:50.409 22:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:50.409 22:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:50.409 22:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
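Note: in the latency summary above, the Total row is the IOPS-weighted combination of the two namespaces: the delay-backed NSID 1 averages about 33.6 ms per read while the null-backed NSID 2 averages about 11.4 ms, and weighting by their IOPS reproduces the reported overall average. A one-line check with the numbers copied from the table:
# Verify the Total row is the IOPS-weighted average latency (values from the summary above).
awk 'BEGIN {
    printf "%.2f us\n", (2826.63*33583.76 + 11196.88*11431.29) / (2826.63 + 11196.88)
}'
# prints approximately 15896, matching the Total line's 15896.43 us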
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:50.974 null0 00:11:50.974 22:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:50.974 22:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:50.974 22:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:51.232 null1 00:11:51.490 22:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:51.490 22:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:51.490 22:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:52.055 null2 00:11:52.055 22:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:52.055 22:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:52.055 22:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:52.620 null3 00:11:52.620 22:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:52.620 22:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:52.620 22:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:52.878 null4 00:11:52.878 22:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:52.878 22:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:52.878 22:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:53.444 null5 00:11:53.444 22:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:53.444 22:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:53.444 22:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:54.010 null6 00:11:54.271 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.271 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.271 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:54.838 null7 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 772318 772319 772320 772322 772324 772327 772328 772331 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:54.838 22:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:55.097 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:55.097 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:55.097 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.356 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:55.356 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:55.356 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:55.356 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.356 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.615 22:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.615 22:52:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:55.874 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.874 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:55.874 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.133 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.392 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.651 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.651 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.651 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.651 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.651 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.651 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.910 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.910 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.910 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.170 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.170 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.170 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.170 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.170 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.170 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.429 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.744 22:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.020 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.278 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.278 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.278 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.278 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.278 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.279 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.279 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.279 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.279 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.279 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.279 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.537 22:52:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.537 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.796 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.796 22:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.796 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.055 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.055 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.055 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.055 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.055 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.055 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.313 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.314 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.314 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.314 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.314 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.314 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.572 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.572 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.572 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.573 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.831 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.831 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.831 22:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.831 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:59.831 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.832 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.090 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.091 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.091 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.091 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.091 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.349 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.608 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.608 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.608 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.608 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.608 22:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.867 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.126 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.385 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:01.385 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.385 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.385 22:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.951 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.951 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.951 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.951 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.951 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:02.518 22:52:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.518 rmmod nvme_tcp 00:12:02.518 rmmod nvme_fabrics 00:12:02.518 rmmod nvme_keyring 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 767605 ']' 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 767605 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 767605 ']' 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 767605 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 767605 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 767605' 00:12:02.518 killing process with pid 767605 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 767605 00:12:02.518 22:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 767605 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.778 22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.778 
22:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.320 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:05.320 00:12:05.320 real 0m55.912s 00:12:05.320 user 4m10.804s 00:12:05.320 sys 0m20.060s 00:12:05.320 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.320 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.320 ************************************ 00:12:05.320 END TEST nvmf_ns_hotplug_stress 00:12:05.321 ************************************ 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:05.321 ************************************ 00:12:05.321 START TEST nvmf_delete_subsystem 00:12:05.321 ************************************ 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:05.321 * Looking for test storage... 00:12:05.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:05.321 22:52:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:05.321 22:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.615 22:52:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:08.615 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:08.615 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.615 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:08.616 Found net devices under 0000:84:00.0: cvl_0_0 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:08.616 Found net devices under 0000:84:00.1: cvl_0_1 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.616 22:52:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:12:08.616 00:12:08.616 --- 10.0.0.2 ping statistics --- 00:12:08.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.616 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:12:08.616 00:12:08.616 --- 10.0.0.1 ping statistics --- 00:12:08.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.616 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=775482 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:08.616 
22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 775482 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 775482 ']' 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.616 22:52:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.616 [2024-07-22 22:52:44.636154] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:12:08.616 [2024-07-22 22:52:44.636258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.616 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.616 [2024-07-22 22:52:44.750535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:08.616 [2024-07-22 22:52:44.903086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.616 [2024-07-22 22:52:44.903193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.616 [2024-07-22 22:52:44.903229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.616 [2024-07-22 22:52:44.903260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.616 [2024-07-22 22:52:44.903286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
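For reference, the nvmf_tcp_init / nvmfappstart sequence traced above reduces to roughly the Bash sketch below. Interface names (cvl_0_0 / cvl_0_1), addresses and the workspace path are the values seen in this run; the readiness poll via "rpc.py rpc_get_methods" only stands in for the harness's waitforlisten helper and is an assumption, not the exact script.

# Rough sketch (assumptions noted) of the namespace plumbing and target start traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity checks, as in the log

ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Assumed readiness poll; the real harness uses waitforlisten "$nvmfpid" instead.
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done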
00:12:08.616 [2024-07-22 22:52:44.903417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.616 [2024-07-22 22:52:44.903426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 [2024-07-22 22:52:45.083697] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 [2024-07-22 22:52:45.100699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 NULL1 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 Delay0 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=775511 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:08.876 22:52:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:08.876 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.135 [2024-07-22 22:52:45.214852] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
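In plainer form, the RPC sequence traced in delete_subsystem.sh@15-32 above amounts to the sketch below; every argument is copied from the trace, and only the RPC/PERF shell variables are added for readability. The point of the test is that nvmf_delete_subsystem is issued while spdk_nvme_perf still has a 128-deep queue outstanding against the (slow) Delay0 namespace, which is what produces the error completions that follow.

# Sketch of the subsystem setup and mid-I/O delete traced above (arguments copied from the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O at the subsystem, then delete it while the queue is still full.
"$PERF" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1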
00:12:11.036 22:52:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.036 22:52:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.036 22:52:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error 
(sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 [2024-07-22 22:52:47.322473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6e50 is same with the state(5) to be set 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 starting I/O failed: -6 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.036 Write completed with error (sct=0, sc=8) 00:12:11.036 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read 
completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, 
sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 Write completed with error (sct=0, sc=8) 00:12:11.037 starting I/O failed: -6 00:12:11.037 Read completed with error (sct=0, sc=8) 00:12:11.037 [2024-07-22 22:52:47.324249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f39dc000c00 is same with the state(5) to be set 00:12:11.037 starting I/O failed: -6 00:12:11.037 starting I/O failed: -6 00:12:11.037 starting I/O failed: -6 00:12:11.037 starting I/O failed: -6 00:12:11.973 [2024-07-22 22:52:48.272025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4a30 is same with the state(5) to be set 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 
00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 [2024-07-22 22:52:48.324890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6970 is same with the state(5) to be set 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Read completed with error (sct=0, sc=8) 00:12:12.231 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 [2024-07-22 22:52:48.325159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b7180 is same with the state(5) to be set 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 
Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 [2024-07-22 22:52:48.325550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f39dc00d7c0 is same with the state(5) to be set 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Write completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 Read completed with 
error (sct=0, sc=8) 00:12:12.232 Read completed with error (sct=0, sc=8) 00:12:12.232 [2024-07-22 22:52:48.326212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f39dc00d000 is same with the state(5) to be set 00:12:12.232 Initializing NVMe Controllers 00:12:12.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:12.232 Controller IO queue size 128, less than required. 00:12:12.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:12.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:12.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:12.232 Initialization complete. Launching workers. 00:12:12.232 ======================================================== 00:12:12.232 Latency(us) 00:12:12.232 Device Information : IOPS MiB/s Average min max 00:12:12.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.47 0.08 901091.39 999.51 1016214.90 00:12:12.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 188.28 0.09 901164.22 897.28 1016388.27 00:12:12.232 ======================================================== 00:12:12.232 Total : 356.75 0.17 901129.82 897.28 1016388.27 00:12:12.232 00:12:12.232 [2024-07-22 22:52:48.327177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c4a30 (9): Bad file descriptor 00:12:12.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:12.232 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.232 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:12.232 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 775511 00:12:12.232 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 775511 00:12:12.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (775511) - No such process 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 775511 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 775511 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 775511 00:12:12.799 22:52:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 [2024-07-22 22:52:48.852249] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=775911 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:12.799 22:52:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:12.799 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.799 [2024-07-22 22:52:48.944945] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
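The block above re-creates nqn.2016-06.io.spdk:cnode1 (subsystem, TCP listener on 10.0.0.2:4420, namespace Delay0), launches a second spdk_nvme_perf run against it in the background, and then enters the kill -0 / sleep 0.5 polling loop visible in the lines that follow. A condensed sketch of that delete-under-load pattern, assuming the subsystem is removed elsewhere in the test while perf is still running; the binary path and perf flags are taken from the log, the loop bound and failure handling are illustrative only:

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # ... the test deletes nqn.2016-06.io.spdk:cnode1 here (assumed, not shown in this excerpt) ...

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # give up if perf never notices the deletion
        sleep 0.5
    done
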
00:12:13.060 22:52:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.061 22:52:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:13.061 22:52:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.627 22:52:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.627 22:52:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:13.627 22:52:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.193 22:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:14.193 22:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:14.193 22:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.759 22:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:14.759 22:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:14.759 22:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.324 22:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.324 22:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:15.324 22:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.582 22:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.582 22:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:15.582 22:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.840 Initializing NVMe Controllers 00:12:15.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:15.840 Controller IO queue size 128, less than required. 00:12:15.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:15.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:15.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:15.840 Initialization complete. Launching workers. 
00:12:15.840 ======================================================== 00:12:15.840 Latency(us) 00:12:15.840 Device Information : IOPS MiB/s Average min max 00:12:15.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005813.05 1000302.71 1016026.26 00:12:15.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006240.22 1000226.36 1017252.62 00:12:15.840 ======================================================== 00:12:15.840 Total : 256.00 0.12 1006026.63 1000226.36 1017252.62 00:12:15.840 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 775911 00:12:16.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (775911) - No such process 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 775911 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.120 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.120 rmmod nvme_tcp 00:12:16.386 rmmod nvme_fabrics 00:12:16.386 rmmod nvme_keyring 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 775482 ']' 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 775482 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 775482 ']' 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 775482 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775482 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775482' 00:12:16.386 killing process with pid 775482 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 775482 00:12:16.386 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 775482 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.645 22:52:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.196 00:12:19.196 real 0m13.676s 00:12:19.196 user 0m28.400s 00:12:19.196 sys 0m3.967s 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.196 ************************************ 00:12:19.196 END TEST nvmf_delete_subsystem 00:12:19.196 ************************************ 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:19.196 ************************************ 00:12:19.196 START TEST nvmf_host_management 00:12:19.196 ************************************ 00:12:19.196 22:52:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:19.196 * Looking for test storage... 
00:12:19.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.196 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.197 22:52:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.492 
22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:22.492 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:22.493 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:22.493 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:22.493 Found net devices under 0000:84:00.0: cvl_0_0 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:22.493 Found net devices under 0000:84:00.1: cvl_0_1 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:12:22.493 00:12:22.493 --- 10.0.0.2 ping statistics --- 00:12:22.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.493 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:12:22.493 00:12:22.493 --- 10.0.0.1 ping statistics --- 00:12:22.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.493 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:22.493 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=778399 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 778399 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 778399 ']' 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.494 22:52:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.494 [2024-07-22 22:52:58.718475] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:12:22.494 [2024-07-22 22:52:58.718638] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.494 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.752 [2024-07-22 22:52:58.844307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.752 [2024-07-22 22:52:58.961154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.752 [2024-07-22 22:52:58.961220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.752 [2024-07-22 22:52:58.961240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.752 [2024-07-22 22:52:58.961256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.752 [2024-07-22 22:52:58.961271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.752 [2024-07-22 22:52:58.961390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.752 [2024-07-22 22:52:58.961451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.752 [2024-07-22 22:52:58.961757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:22.752 [2024-07-22 22:52:58.961763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 [2024-07-22 22:52:59.809893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 Malloc0 00:12:23.687 [2024-07-22 22:52:59.884951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=778576 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 778576 /var/tmp/bdevperf.sock 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 778576 ']' 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:23.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
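Here host_management.sh starts bdevperf against the target it just configured: the NVMe-oF attach parameters are generated on the fly (gen_nvmf_target_json 0) and handed to bdevperf as a JSON config on /dev/fd/63, i.e. via process substitution, while -r points bdevperf's own RPC server at /var/tmp/bdevperf.sock. A minimal sketch of that invocation, assuming the gen_nvmf_target_json helper from test/nvmf/common.sh is in scope; the resolved JSON it produces is printed a few lines below:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # 64 outstanding I/Os of 64 KiB each, "verify" workload, run for 10 seconds
    "$BDEVPERF" -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
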
00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.687 { 00:12:23.687 "params": { 00:12:23.687 "name": "Nvme$subsystem", 00:12:23.687 "trtype": "$TEST_TRANSPORT", 00:12:23.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.687 "adrfam": "ipv4", 00:12:23.687 "trsvcid": "$NVMF_PORT", 00:12:23.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.687 "hdgst": ${hdgst:-false}, 00:12:23.687 "ddgst": ${ddgst:-false} 00:12:23.687 }, 00:12:23.687 "method": "bdev_nvme_attach_controller" 00:12:23.687 } 00:12:23.687 EOF 00:12:23.687 )") 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:23.687 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:23.688 22:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.688 "params": { 00:12:23.688 "name": "Nvme0", 00:12:23.688 "trtype": "tcp", 00:12:23.688 "traddr": "10.0.0.2", 00:12:23.688 "adrfam": "ipv4", 00:12:23.688 "trsvcid": "4420", 00:12:23.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:23.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:23.688 "hdgst": false, 00:12:23.688 "ddgst": false 00:12:23.688 }, 00:12:23.688 "method": "bdev_nvme_attach_controller" 00:12:23.688 }' 00:12:23.946 [2024-07-22 22:53:00.013257] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:12:23.946 [2024-07-22 22:53:00.013431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778576 ] 00:12:23.946 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.946 [2024-07-22 22:53:00.121778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.946 [2024-07-22 22:53:00.232059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.205 Running I/O for 10 seconds... 
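With I/O running, the waitforio step traced in the lines below polls bdevperf's RPC socket until the Nvme0n1 read counter reaches the threshold used by host_management.sh (100 reads, sampled up to 10 times with a 0.25 s pause); the first sample reads 67, the second 515. A minimal sketch of that polling loop, calling scripts/rpc.py directly where the test uses its rpc_cmd wrapper:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for (( i = 10; i != 0; i-- )); do
        reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break   # enough reads observed, move on
        sleep 0.25
    done
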
00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:24.464 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.724 22:53:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.724 22:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 22:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.724 22:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:24.724 22:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.724 22:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 [2024-07-22 22:53:01.011103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.724 [2024-07-22 22:53:01.011171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.011197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.724 [2024-07-22 22:53:01.011216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.011236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.724 [2024-07-22 22:53:01.011255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.011275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.724 [2024-07-22 22:53:01.011294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.011321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eb370 is same with the state(5) to be set 00:12:24.724 22:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.724 22:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 
-- # sleep 1 00:12:24.724 [2024-07-22 22:53:01.024940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eb370 (9): Bad file descriptor 00:12:24.724 [2024-07-22 22:53:01.025058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 
[2024-07-22 22:53:01.025481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 
22:53:01.025890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.724 [2024-07-22 22:53:01.025952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.724 [2024-07-22 22:53:01.025972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.025992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 
22:53:01.026297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 
22:53:01.026714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.026975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.026994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 
22:53:01.027119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.725 [2024-07-22 22:53:01.027476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.725 [2024-07-22 22:53:01.027496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.726 [2024-07-22 22:53:01.027517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.726 [2024-07-22 
22:53:01.027536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.726 [2024-07-22 22:53:01.027557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.726 [2024-07-22 22:53:01.027577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.726 [2024-07-22 22:53:01.027598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.726 [2024-07-22 22:53:01.027617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.726 [2024-07-22 22:53:01.027653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.726 [2024-07-22 22:53:01.027674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.726 [2024-07-22 22:53:01.027695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.726 [2024-07-22 22:53:01.027715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.726 [2024-07-22 22:53:01.027825] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e5650 was disconnected and freed. reset controller. 00:12:24.726 [2024-07-22 22:53:01.029342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:24.726 task offset: 81920 on job bdev=Nvme0n1 fails 00:12:24.726 00:12:24.726 Latency(us) 00:12:24.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.726 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:24.726 Job: Nvme0n1 ended in about 0.55 seconds with error 00:12:24.726 Verification LBA range: start 0x0 length 0x400 00:12:24.726 Nvme0n1 : 0.55 1171.92 73.24 117.19 0.00 48225.92 3325.35 46991.74 00:12:24.726 =================================================================================================================== 00:12:24.726 Total : 1171.92 73.24 117.19 0.00 48225.92 3325.35 46991.74 00:12:24.726 [2024-07-22 22:53:01.031846] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:24.984 [2024-07-22 22:53:01.082136] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
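The block of *NOTICE* completions above is the expected teardown path for this run: bdevperf had 64 WRITEs in flight (queue depth 64, lba 81920 through 89984, cid 0 through 63), and each of them completes with ABORTED - SQ DELETION (status 00/08) once the submission queue is dropped, after which qpair 0x19e5650 is freed and the initiator resets the controller. When scanning a saved console log for this pattern, a quick filter like the sketch below (the log file name is an assumption) is enough to confirm the failure is this abort storm rather than media or transport errors:
  grep -c 'ABORTED - SQ DELETION' bdevperf-console.log                              # how many queued commands were aborted
  grep -o 'lba:[0-9]*' bdevperf-console.log | sort -t: -k2,2n | sed -n '1p;$p'      # smallest/largest LBA that was in flight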
00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 778576 00:12:25.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (778576) - No such process 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:25.919 { 00:12:25.919 "params": { 00:12:25.919 "name": "Nvme$subsystem", 00:12:25.919 "trtype": "$TEST_TRANSPORT", 00:12:25.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:25.919 "adrfam": "ipv4", 00:12:25.919 "trsvcid": "$NVMF_PORT", 00:12:25.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:25.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:25.919 "hdgst": ${hdgst:-false}, 00:12:25.919 "ddgst": ${ddgst:-false} 00:12:25.919 }, 00:12:25.919 "method": "bdev_nvme_attach_controller" 00:12:25.919 } 00:12:25.919 EOF 00:12:25.919 )") 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:25.919 22:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:25.919 "params": { 00:12:25.919 "name": "Nvme0", 00:12:25.919 "trtype": "tcp", 00:12:25.919 "traddr": "10.0.0.2", 00:12:25.919 "adrfam": "ipv4", 00:12:25.919 "trsvcid": "4420", 00:12:25.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:25.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:25.919 "hdgst": false, 00:12:25.920 "ddgst": false 00:12:25.920 }, 00:12:25.920 "method": "bdev_nvme_attach_controller" 00:12:25.920 }' 00:12:25.920 [2024-07-22 22:53:02.077844] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
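The heredoc between nvmf/common.sh@534 and @554 is the per-subsystem template that gen_nvmf_target_json expands, and the printf at @558 shows the resulting bdev_nvme_attach_controller parameters for Nvme0 (TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0, digests disabled). host_management.sh@100 hands that JSON to bdevperf through an anonymous file descriptor (/dev/fd/62); outside the harness the same invocation can be sketched with process substitution, with paths relative to the SPDK tree assumed:
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1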
00:12:25.920 [2024-07-22 22:53:02.077938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778851 ] 00:12:25.920 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.920 [2024-07-22 22:53:02.191100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.178 [2024-07-22 22:53:02.301587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.437 Running I/O for 1 seconds... 00:12:27.371 00:12:27.371 Latency(us) 00:12:27.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.372 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:27.372 Verification LBA range: start 0x0 length 0x400 00:12:27.372 Nvme0n1 : 1.03 1217.58 76.10 0.00 0.00 51251.35 5267.15 45438.29 00:12:27.372 =================================================================================================================== 00:12:27.372 Total : 1217.58 76.10 0.00 0.00 51251.35 5267.15 45438.29 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.630 rmmod nvme_tcp 00:12:27.630 rmmod nvme_fabrics 00:12:27.630 rmmod nvme_keyring 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 778399 ']' 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 778399 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 778399 ']' 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 778399 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@953 -- # uname 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.630 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 778399 00:12:27.888 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:27.888 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:27.888 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 778399' 00:12:27.888 killing process with pid 778399 00:12:27.888 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 778399 00:12:27.888 22:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 778399 00:12:28.148 [2024-07-22 22:53:04.237527] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.148 22:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.057 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.057 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:30.057 00:12:30.057 real 0m11.386s 00:12:30.057 user 0m25.349s 00:12:30.057 sys 0m4.262s 00:12:30.057 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.057 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.058 ************************************ 00:12:30.058 END TEST nvmf_host_management 00:12:30.058 ************************************ 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:30.318 ************************************ 00:12:30.318 START TEST nvmf_lvol 00:12:30.318 
************************************ 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:30.318 * Looking for test storage... 00:12:30.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.318 22:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:33.614 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:33.614 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:33.614 Found net devices under 0000:84:00.0: cvl_0_0 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:33.614 Found net devices under 0000:84:00.1: cvl_0_1 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.614 22:53:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.614 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:12:33.615 00:12:33.615 --- 10.0.0.2 ping statistics --- 00:12:33.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.615 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:12:33.615 00:12:33.615 --- 10.0.0.1 ping statistics --- 00:12:33.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.615 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=781082 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 781082 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 781082 ']' 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.615 22:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:33.615 [2024-07-22 22:53:09.817394] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
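nvmftestinit above splits the two ice ports into a point-to-point test rig: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and both directions are verified with a single ping before nvmfappstart launches nvmf_tgt inside the namespace on cores 0, 1 and 2. A condensed sketch of that plumbing, using only commands that appear in this log, looks like:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7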
00:12:33.615 [2024-07-22 22:53:09.817566] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.615 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.874 [2024-07-22 22:53:09.970364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.874 [2024-07-22 22:53:10.124646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.874 [2024-07-22 22:53:10.124749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.874 [2024-07-22 22:53:10.124786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.874 [2024-07-22 22:53:10.124816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.874 [2024-07-22 22:53:10.124842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.874 [2024-07-22 22:53:10.124978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.874 [2024-07-22 22:53:10.125039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.875 [2024-07-22 22:53:10.125042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.134 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:34.700 [2024-07-22 22:53:10.891386] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.700 22:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:35.267 22:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:35.267 22:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:35.834 22:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:35.834 22:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:36.428 22:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:37.002 22:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=dd0639a8-1cf7-4487-bc28-f0cf49778348 
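With the target listening, nvmf_lvol.sh builds its backing store entirely from RPCs: the TCP transport is created, two 64 MiB malloc bdevs are combined into a raid0 with a 64 KiB strip size, and a logical volume store named lvs is created on top of the raid (the UUID printed above identifies that store). Condensed, and with the full scripts/rpc.py path abbreviated, the sequence is:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                 # Malloc0
  rpc.py bdev_malloc_create 64 512                                 # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs                        # prints the lvstore UUID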
00:12:37.002 22:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd0639a8-1cf7-4487-bc28-f0cf49778348 lvol 20 00:12:37.568 22:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=492a0320-1890-485f-b831-987f4ddc80c8 00:12:37.568 22:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:38.133 22:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 492a0320-1890-485f-b831-987f4ddc80c8 00:12:38.701 22:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:38.959 [2024-07-22 22:53:15.138481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.959 22:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.525 22:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=781821 00:12:39.525 22:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:39.525 22:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:39.525 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.901 22:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 492a0320-1890-485f-b831-987f4ddc80c8 MY_SNAPSHOT 00:12:41.160 22:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3ac4966a-d073-423f-9093-9bf5462c3578 00:12:41.160 22:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 492a0320-1890-485f-b831-987f4ddc80c8 30 00:12:42.100 22:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3ac4966a-d073-423f-9093-9bf5462c3578 MY_CLONE 00:12:42.665 22:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=777598fb-482c-4423-a508-83b00b8d7d57 00:12:42.665 22:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 777598fb-482c-4423-a508-83b00b8d7d57 00:12:43.599 22:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 781821 00:12:50.161 Initializing NVMe Controllers 00:12:50.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:50.161 Controller IO queue size 128, less than required. 00:12:50.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
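The steps above are the core of the lvol test: a 20 MiB logical volume is carved out of the store, exported as namespace 1 of nqn.2016-06.io.spdk:cnode0, and spdk_nvme_perf drives 4 KiB random writes at queue depth 128 over TCP for 10 seconds; while that workload runs, the volume is snapshotted, grown to 30 MiB, the snapshot is cloned, and the clone is inflated. Stripped down to the RPCs shown in this log, with UUIDs replaced by placeholders, the flow is:
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  rpc.py bdev_lvol_resize <lvol-uuid> 30
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate <clone-uuid>
  wait                                                             # let the 10 second perf run finish before teardown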
00:12:50.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:50.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:50.161 Initialization complete. Launching workers. 00:12:50.161 ======================================================== 00:12:50.161 Latency(us) 00:12:50.161 Device Information : IOPS MiB/s Average min max 00:12:50.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8005.20 31.27 16000.05 2223.76 83413.38 00:12:50.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7902.50 30.87 16203.92 2628.28 70440.61 00:12:50.161 ======================================================== 00:12:50.161 Total : 15907.70 62.14 16101.33 2223.76 83413.38 00:12:50.161 00:12:50.161 22:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:50.727 22:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 492a0320-1890-485f-b831-987f4ddc80c8 00:12:51.294 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd0639a8-1cf7-4487-bc28-f0cf49778348 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.861 22:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.861 rmmod nvme_tcp 00:12:51.861 rmmod nvme_fabrics 00:12:51.861 rmmod nvme_keyring 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 781082 ']' 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 781082 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 781082 ']' 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 781082 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781082 00:12:51.861 22:53:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781082' 00:12:51.861 killing process with pid 781082 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 781082 00:12:51.861 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 781082 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.430 22:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.339 00:12:54.339 real 0m24.134s 00:12:54.339 user 1m20.946s 00:12:54.339 sys 0m7.497s 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:54.339 ************************************ 00:12:54.339 END TEST nvmf_lvol 00:12:54.339 ************************************ 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.339 ************************************ 00:12:54.339 START TEST nvmf_lvs_grow 00:12:54.339 ************************************ 00:12:54.339 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:54.600 * Looking for test storage... 
00:12:54.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.600 22:53:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:54.600 22:53:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.600 22:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:57.902 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:57.902 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.902 
22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:57.902 Found net devices under 0000:84:00.0: cvl_0_0 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:57.902 Found net devices under 0000:84:00.1: cvl_0_1 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.902 22:53:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.902 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:12:57.903 00:12:57.903 --- 10.0.0.2 ping statistics --- 00:12:57.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.903 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:12:57.903 00:12:57.903 --- 10.0.0.1 ping statistics --- 00:12:57.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.903 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=785307 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 785307 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 785307 ']' 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.903 22:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 [2024-07-22 22:53:34.039726] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:12:57.903 [2024-07-22 22:53:34.039818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.903 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.903 [2024-07-22 22:53:34.116148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.163 [2024-07-22 22:53:34.261506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.163 [2024-07-22 22:53:34.261611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.163 [2024-07-22 22:53:34.261649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.163 [2024-07-22 22:53:34.261684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.163 [2024-07-22 22:53:34.261713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.163 [2024-07-22 22:53:34.261777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.134 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:59.703 [2024-07-22 22:53:35.841664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.703 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:59.703 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:59.703 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 ************************************ 00:12:59.704 START TEST lvs_grow_clean 00:12:59.704 ************************************ 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.704 22:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:00.273 22:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:00.273 22:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:00.842 22:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=db7f7cd6-7935-452f-b325-2a455686c409 00:13:00.842 22:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:00.842 22:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:01.412 22:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:01.412 22:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:01.412 22:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db7f7cd6-7935-452f-b325-2a455686c409 lvol 150 00:13:01.982 22:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9b1fa8a-aa66-4409-9a34-5eeafdbd1441 00:13:01.982 22:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:01.982 22:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:02.551 [2024-07-22 22:53:38.756282] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:02.551 [2024-07-22 22:53:38.756468] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:02.551 true 00:13:02.551 22:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:02.551 22:53:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:03.120 22:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:03.121 22:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:03.690 22:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9b1fa8a-aa66-4409-9a34-5eeafdbd1441 00:13:04.259 22:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:04.830 [2024-07-22 22:53:41.036053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.830 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=786151 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 786151 /var/tmp/bdevperf.sock 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 786151 ']' 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:05.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.400 22:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:05.659 [2024-07-22 22:53:41.720283] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:13:05.659 [2024-07-22 22:53:41.720491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786151 ] 00:13:05.659 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.659 [2024-07-22 22:53:41.838547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.659 [2024-07-22 22:53:41.949176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.918 22:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.918 22:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:05.918 22:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:06.489 Nvme0n1 00:13:06.489 22:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:07.074 [ 00:13:07.074 { 00:13:07.074 "name": "Nvme0n1", 00:13:07.074 "aliases": [ 00:13:07.074 "e9b1fa8a-aa66-4409-9a34-5eeafdbd1441" 00:13:07.074 ], 00:13:07.074 "product_name": "NVMe disk", 00:13:07.074 "block_size": 4096, 00:13:07.074 "num_blocks": 38912, 00:13:07.074 "uuid": "e9b1fa8a-aa66-4409-9a34-5eeafdbd1441", 00:13:07.074 "assigned_rate_limits": { 00:13:07.074 "rw_ios_per_sec": 0, 00:13:07.074 "rw_mbytes_per_sec": 0, 00:13:07.074 "r_mbytes_per_sec": 0, 00:13:07.074 "w_mbytes_per_sec": 0 00:13:07.074 }, 00:13:07.074 "claimed": false, 00:13:07.074 "zoned": false, 00:13:07.074 "supported_io_types": { 00:13:07.074 "read": true, 00:13:07.074 "write": true, 00:13:07.074 "unmap": true, 00:13:07.074 "flush": true, 00:13:07.074 "reset": true, 00:13:07.074 "nvme_admin": true, 00:13:07.074 "nvme_io": true, 00:13:07.074 "nvme_io_md": false, 00:13:07.074 "write_zeroes": true, 00:13:07.074 "zcopy": false, 00:13:07.074 "get_zone_info": false, 00:13:07.074 "zone_management": false, 00:13:07.074 "zone_append": false, 00:13:07.074 "compare": true, 00:13:07.074 "compare_and_write": true, 00:13:07.074 "abort": true, 00:13:07.074 "seek_hole": false, 00:13:07.074 "seek_data": false, 00:13:07.074 "copy": true, 00:13:07.074 "nvme_iov_md": false 00:13:07.074 }, 00:13:07.074 "memory_domains": [ 00:13:07.074 { 00:13:07.074 "dma_device_id": "system", 00:13:07.074 "dma_device_type": 1 00:13:07.074 } 00:13:07.074 ], 00:13:07.074 "driver_specific": { 00:13:07.074 "nvme": [ 00:13:07.074 { 00:13:07.074 "trid": { 00:13:07.074 "trtype": "TCP", 00:13:07.074 "adrfam": "IPv4", 00:13:07.074 "traddr": "10.0.0.2", 00:13:07.074 "trsvcid": "4420", 00:13:07.074 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:07.074 }, 00:13:07.074 "ctrlr_data": { 00:13:07.074 "cntlid": 1, 00:13:07.074 "vendor_id": "0x8086", 00:13:07.074 "model_number": "SPDK bdev Controller", 00:13:07.074 "serial_number": "SPDK0", 00:13:07.074 "firmware_revision": "24.09", 00:13:07.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:07.074 "oacs": { 00:13:07.074 "security": 0, 00:13:07.074 "format": 0, 00:13:07.074 "firmware": 0, 00:13:07.074 "ns_manage": 0 00:13:07.074 }, 00:13:07.074 
"multi_ctrlr": true, 00:13:07.074 "ana_reporting": false 00:13:07.074 }, 00:13:07.074 "vs": { 00:13:07.074 "nvme_version": "1.3" 00:13:07.074 }, 00:13:07.074 "ns_data": { 00:13:07.074 "id": 1, 00:13:07.074 "can_share": true 00:13:07.074 } 00:13:07.074 } 00:13:07.074 ], 00:13:07.074 "mp_policy": "active_passive" 00:13:07.074 } 00:13:07.074 } 00:13:07.074 ] 00:13:07.074 22:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=786296 00:13:07.074 22:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:07.074 22:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:07.333 Running I/O for 10 seconds... 00:13:08.711 Latency(us) 00:13:08.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.711 Nvme0n1 : 1.00 11558.00 45.15 0.00 0.00 0.00 0.00 0.00 00:13:08.711 =================================================================================================================== 00:13:08.711 Total : 11558.00 45.15 0.00 0.00 0.00 0.00 0.00 00:13:08.711 00:13:09.278 22:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:09.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.278 Nvme0n1 : 2.00 11684.50 45.64 0.00 0.00 0.00 0.00 0.00 00:13:09.278 =================================================================================================================== 00:13:09.278 Total : 11684.50 45.64 0.00 0.00 0.00 0.00 0.00 00:13:09.278 00:13:09.846 true 00:13:09.846 22:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:09.846 22:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:10.105 22:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:10.105 22:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:10.105 22:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 786296 00:13:10.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.365 Nvme0n1 : 3.00 11726.67 45.81 0.00 0.00 0.00 0.00 0.00 00:13:10.365 =================================================================================================================== 00:13:10.365 Total : 11726.67 45.81 0.00 0.00 0.00 0.00 0.00 00:13:10.365 00:13:11.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.302 Nvme0n1 : 4.00 11795.50 46.08 0.00 0.00 0.00 0.00 0.00 00:13:11.302 =================================================================================================================== 00:13:11.302 Total : 11795.50 46.08 0.00 0.00 0.00 0.00 0.00 00:13:11.302 00:13:12.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:13:12.678 Nvme0n1 : 5.00 11849.60 46.29 0.00 0.00 0.00 0.00 0.00 00:13:12.678 =================================================================================================================== 00:13:12.678 Total : 11849.60 46.29 0.00 0.00 0.00 0.00 0.00 00:13:12.678 00:13:13.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.615 Nvme0n1 : 6.00 11885.50 46.43 0.00 0.00 0.00 0.00 0.00 00:13:13.615 =================================================================================================================== 00:13:13.615 Total : 11885.50 46.43 0.00 0.00 0.00 0.00 0.00 00:13:13.615 00:13:14.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.552 Nvme0n1 : 7.00 11911.14 46.53 0.00 0.00 0.00 0.00 0.00 00:13:14.552 =================================================================================================================== 00:13:14.552 Total : 11911.14 46.53 0.00 0.00 0.00 0.00 0.00 00:13:14.552 00:13:15.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.489 Nvme0n1 : 8.00 11946.25 46.67 0.00 0.00 0.00 0.00 0.00 00:13:15.489 =================================================================================================================== 00:13:15.489 Total : 11946.25 46.67 0.00 0.00 0.00 0.00 0.00 00:13:15.489 00:13:16.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.465 Nvme0n1 : 9.00 11959.44 46.72 0.00 0.00 0.00 0.00 0.00 00:13:16.465 =================================================================================================================== 00:13:16.465 Total : 11959.44 46.72 0.00 0.00 0.00 0.00 0.00 00:13:16.465 00:13:17.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.401 Nvme0n1 : 10.00 11982.70 46.81 0.00 0.00 0.00 0.00 0.00 00:13:17.401 =================================================================================================================== 00:13:17.401 Total : 11982.70 46.81 0.00 0.00 0.00 0.00 0.00 00:13:17.401 00:13:17.401 00:13:17.401 Latency(us) 00:13:17.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.401 Nvme0n1 : 10.01 11988.49 46.83 0.00 0.00 10670.44 2815.62 21651.15 00:13:17.401 =================================================================================================================== 00:13:17.401 Total : 11988.49 46.83 0.00 0.00 10670.44 2815.62 21651.15 00:13:17.401 0 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 786151 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 786151 ']' 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 786151 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 786151 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:17.401 
22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 786151' 00:13:17.401 killing process with pid 786151 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 786151 00:13:17.401 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.401 00:13:17.401 Latency(us) 00:13:17.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.401 =================================================================================================================== 00:13:17.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.401 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 786151 00:13:17.661 22:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.230 22:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:18.490 22:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:18.490 22:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:19.060 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:19.060 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:19.060 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:19.630 [2024-07-22 22:53:55.753839] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:19.630 22:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:20.199 request: 00:13:20.199 { 00:13:20.199 "uuid": "db7f7cd6-7935-452f-b325-2a455686c409", 00:13:20.199 "method": "bdev_lvol_get_lvstores", 00:13:20.199 "req_id": 1 00:13:20.199 } 00:13:20.199 Got JSON-RPC error response 00:13:20.199 response: 00:13:20.199 { 00:13:20.199 "code": -19, 00:13:20.199 "message": "No such device" 00:13:20.199 } 00:13:20.199 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:20.199 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.199 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.199 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.199 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.769 aio_bdev 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9b1fa8a-aa66-4409-9a34-5eeafdbd1441 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=e9b1fa8a-aa66-4409-9a34-5eeafdbd1441 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.769 22:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:21.338 22:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b e9b1fa8a-aa66-4409-9a34-5eeafdbd1441 -t 2000 00:13:21.907 [ 00:13:21.907 { 00:13:21.907 "name": "e9b1fa8a-aa66-4409-9a34-5eeafdbd1441", 00:13:21.907 "aliases": [ 00:13:21.907 "lvs/lvol" 00:13:21.907 ], 00:13:21.907 "product_name": "Logical Volume", 00:13:21.907 "block_size": 4096, 00:13:21.907 "num_blocks": 38912, 00:13:21.907 "uuid": "e9b1fa8a-aa66-4409-9a34-5eeafdbd1441", 00:13:21.907 "assigned_rate_limits": { 00:13:21.907 "rw_ios_per_sec": 0, 00:13:21.907 "rw_mbytes_per_sec": 0, 00:13:21.907 "r_mbytes_per_sec": 0, 00:13:21.907 "w_mbytes_per_sec": 0 00:13:21.907 }, 00:13:21.907 "claimed": false, 00:13:21.907 "zoned": false, 00:13:21.907 "supported_io_types": { 00:13:21.907 "read": true, 00:13:21.907 "write": true, 00:13:21.907 "unmap": true, 00:13:21.907 "flush": false, 00:13:21.907 "reset": true, 00:13:21.907 "nvme_admin": false, 00:13:21.907 "nvme_io": false, 00:13:21.907 "nvme_io_md": false, 00:13:21.907 "write_zeroes": true, 00:13:21.907 "zcopy": false, 00:13:21.907 "get_zone_info": false, 00:13:21.907 "zone_management": false, 00:13:21.907 "zone_append": false, 00:13:21.907 "compare": false, 00:13:21.907 "compare_and_write": false, 00:13:21.907 "abort": false, 00:13:21.907 "seek_hole": true, 00:13:21.907 "seek_data": true, 00:13:21.907 "copy": false, 00:13:21.907 "nvme_iov_md": false 00:13:21.907 }, 00:13:21.907 "driver_specific": { 00:13:21.907 "lvol": { 00:13:21.907 "lvol_store_uuid": "db7f7cd6-7935-452f-b325-2a455686c409", 00:13:21.907 "base_bdev": "aio_bdev", 00:13:21.907 "thin_provision": false, 00:13:21.907 "num_allocated_clusters": 38, 00:13:21.907 "snapshot": false, 00:13:21.907 "clone": false, 00:13:21.907 "esnap_clone": false 00:13:21.907 } 00:13:21.907 } 00:13:21.907 } 00:13:21.907 ] 00:13:21.907 22:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:21.907 22:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:21.907 22:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:22.477 22:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:22.477 22:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:22.477 22:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:23.046 22:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:23.046 22:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9b1fa8a-aa66-4409-9a34-5eeafdbd1441 00:13:23.305 22:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db7f7cd6-7935-452f-b325-2a455686c409 00:13:23.881 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:24.140 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.401 00:13:24.401 real 0m24.539s 00:13:24.401 user 0m24.306s 00:13:24.401 sys 0m3.034s 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:24.401 ************************************ 00:13:24.401 END TEST lvs_grow_clean 00:13:24.401 ************************************ 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.401 ************************************ 00:13:24.401 START TEST lvs_grow_dirty 00:13:24.401 ************************************ 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.401 22:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:24.969 22:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:24.969 22:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 
00:13:25.536 22:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:25.536 22:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:25.536 22:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:26.103 22:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:26.103 22:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:26.103 22:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 lvol 150 00:13:26.669 22:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:26.669 22:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:26.669 22:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:27.237 [2024-07-22 22:54:03.269392] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:27.237 [2024-07-22 22:54:03.269582] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:27.237 true 00:13:27.237 22:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:27.237 22:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:27.805 22:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:27.805 22:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:28.372 22:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:28.940 22:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:29.510 [2024-07-22 22:54:05.681957] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.510 22:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=788975 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 788975 /var/tmp/bdevperf.sock 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 788975 ']' 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:29.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.770 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.038 [2024-07-22 22:54:06.125121] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:13:30.038 [2024-07-22 22:54:06.125325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788975 ] 00:13:30.038 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.038 [2024-07-22 22:54:06.244872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.296 [2024-07-22 22:54:06.357078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.296 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.296 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:30.296 22:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:31.231 Nvme0n1 00:13:31.231 22:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:31.798 [ 00:13:31.798 { 00:13:31.798 "name": "Nvme0n1", 00:13:31.798 "aliases": [ 00:13:31.798 "d6c26b24-16f5-44c2-9417-75fc3623c83e" 00:13:31.798 ], 00:13:31.798 "product_name": "NVMe disk", 00:13:31.798 "block_size": 4096, 00:13:31.798 "num_blocks": 38912, 00:13:31.798 "uuid": "d6c26b24-16f5-44c2-9417-75fc3623c83e", 00:13:31.798 "assigned_rate_limits": { 00:13:31.798 "rw_ios_per_sec": 0, 00:13:31.798 "rw_mbytes_per_sec": 0, 00:13:31.798 "r_mbytes_per_sec": 0, 00:13:31.798 "w_mbytes_per_sec": 0 00:13:31.798 }, 00:13:31.798 "claimed": false, 00:13:31.798 "zoned": false, 00:13:31.798 "supported_io_types": { 00:13:31.798 "read": true, 00:13:31.798 "write": true, 00:13:31.798 "unmap": true, 00:13:31.798 "flush": true, 00:13:31.798 "reset": true, 00:13:31.798 "nvme_admin": true, 00:13:31.798 "nvme_io": true, 00:13:31.798 "nvme_io_md": false, 00:13:31.798 "write_zeroes": true, 00:13:31.798 "zcopy": false, 00:13:31.798 "get_zone_info": false, 00:13:31.798 "zone_management": false, 00:13:31.798 "zone_append": false, 00:13:31.798 "compare": true, 00:13:31.798 "compare_and_write": true, 00:13:31.798 "abort": true, 00:13:31.798 "seek_hole": false, 00:13:31.798 "seek_data": false, 00:13:31.798 "copy": true, 00:13:31.798 "nvme_iov_md": false 00:13:31.798 }, 00:13:31.798 "memory_domains": [ 00:13:31.798 { 00:13:31.798 "dma_device_id": "system", 00:13:31.798 "dma_device_type": 1 00:13:31.798 } 00:13:31.798 ], 00:13:31.798 "driver_specific": { 00:13:31.798 "nvme": [ 00:13:31.798 { 00:13:31.798 "trid": { 00:13:31.798 "trtype": "TCP", 00:13:31.798 "adrfam": "IPv4", 00:13:31.798 "traddr": "10.0.0.2", 00:13:31.798 "trsvcid": "4420", 00:13:31.798 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:31.798 }, 00:13:31.798 "ctrlr_data": { 00:13:31.798 "cntlid": 1, 00:13:31.798 "vendor_id": "0x8086", 00:13:31.798 "model_number": "SPDK bdev Controller", 00:13:31.798 "serial_number": "SPDK0", 00:13:31.798 "firmware_revision": "24.09", 00:13:31.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:31.798 "oacs": { 00:13:31.798 "security": 0, 00:13:31.798 "format": 0, 00:13:31.798 "firmware": 0, 00:13:31.798 "ns_manage": 0 00:13:31.798 }, 00:13:31.798 
"multi_ctrlr": true, 00:13:31.798 "ana_reporting": false 00:13:31.798 }, 00:13:31.798 "vs": { 00:13:31.798 "nvme_version": "1.3" 00:13:31.798 }, 00:13:31.798 "ns_data": { 00:13:31.798 "id": 1, 00:13:31.798 "can_share": true 00:13:31.798 } 00:13:31.798 } 00:13:31.798 ], 00:13:31.798 "mp_policy": "active_passive" 00:13:31.798 } 00:13:31.798 } 00:13:31.798 ] 00:13:31.798 22:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=789205 00:13:31.798 22:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:31.798 22:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.057 Running I/O for 10 seconds... 00:13:33.050 Latency(us) 00:13:33.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.050 Nvme0n1 : 1.00 11558.00 45.15 0.00 0.00 0.00 0.00 0.00 00:13:33.050 =================================================================================================================== 00:13:33.050 Total : 11558.00 45.15 0.00 0.00 0.00 0.00 0.00 00:13:33.050 00:13:33.619 22:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:33.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.878 Nvme0n1 : 2.00 11557.50 45.15 0.00 0.00 0.00 0.00 0.00 00:13:33.878 =================================================================================================================== 00:13:33.878 Total : 11557.50 45.15 0.00 0.00 0.00 0.00 0.00 00:13:33.878 00:13:34.137 true 00:13:34.137 22:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:34.137 22:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:34.705 22:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:34.705 22:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:34.705 22:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 789205 00:13:34.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.965 Nvme0n1 : 3.00 11642.00 45.48 0.00 0.00 0.00 0.00 0.00 00:13:34.965 =================================================================================================================== 00:13:34.965 Total : 11642.00 45.48 0.00 0.00 0.00 0.00 0.00 00:13:34.965 00:13:35.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.901 Nvme0n1 : 4.00 11716.00 45.77 0.00 0.00 0.00 0.00 0.00 00:13:35.901 =================================================================================================================== 00:13:35.901 Total : 11716.00 45.77 0.00 0.00 0.00 0.00 0.00 00:13:35.901 00:13:37.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:13:37.277 Nvme0n1 : 5.00 11785.80 46.04 0.00 0.00 0.00 0.00 0.00 00:13:37.277 =================================================================================================================== 00:13:37.277 Total : 11785.80 46.04 0.00 0.00 0.00 0.00 0.00 00:13:37.277 00:13:38.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.211 Nvme0n1 : 6.00 11832.33 46.22 0.00 0.00 0.00 0.00 0.00 00:13:38.212 =================================================================================================================== 00:13:38.212 Total : 11832.33 46.22 0.00 0.00 0.00 0.00 0.00 00:13:38.212 00:13:39.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.146 Nvme0n1 : 7.00 11865.57 46.35 0.00 0.00 0.00 0.00 0.00 00:13:39.146 =================================================================================================================== 00:13:39.146 Total : 11865.57 46.35 0.00 0.00 0.00 0.00 0.00 00:13:39.146 00:13:40.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.082 Nvme0n1 : 8.00 11906.38 46.51 0.00 0.00 0.00 0.00 0.00 00:13:40.082 =================================================================================================================== 00:13:40.082 Total : 11906.38 46.51 0.00 0.00 0.00 0.00 0.00 00:13:40.082 00:13:41.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.017 Nvme0n1 : 9.00 11938.11 46.63 0.00 0.00 0.00 0.00 0.00 00:13:41.017 =================================================================================================================== 00:13:41.017 Total : 11938.11 46.63 0.00 0.00 0.00 0.00 0.00 00:13:41.017 00:13:41.954 00:13:41.954 Latency(us) 00:13:41.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.954 Nvme0n1 : 10.00 11961.24 46.72 0.00 0.00 10694.83 2900.57 27379.48 00:13:41.954 =================================================================================================================== 00:13:41.954 Total : 11961.24 46.72 0.00 0.00 10694.83 2900.57 27379.48 00:13:41.954 0 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 788975 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 788975 ']' 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 788975 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 788975 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 788975' 00:13:41.954 killing process with pid 788975 00:13:41.954 22:54:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 788975 00:13:41.954 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.954 00:13:41.954 Latency(us) 00:13:41.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.954 =================================================================================================================== 00:13:41.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:41.954 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 788975 00:13:42.214 22:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:43.154 22:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:43.723 22:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:43.723 22:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 785307 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 785307 00:13:44.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 785307 Killed "${NVMF_APP[@]}" "$@" 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=791083 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 791083 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 791083 ']' 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.294 22:54:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.294 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.294 [2024-07-22 22:54:20.492988] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:13:44.294 [2024-07-22 22:54:20.493120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.294 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.554 [2024-07-22 22:54:20.625660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.554 [2024-07-22 22:54:20.775409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.554 [2024-07-22 22:54:20.775489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.554 [2024-07-22 22:54:20.775509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.554 [2024-07-22 22:54:20.775525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.554 [2024-07-22 22:54:20.775540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:44.554 [2024-07-22 22:54:20.775585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.814 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.814 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:44.814 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.814 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.814 22:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.814 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.814 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.384 [2024-07-22 22:54:21.543271] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:45.384 [2024-07-22 22:54:21.543520] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:45.384 [2024-07-22 22:54:21.543591] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.384 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:45.643 22:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6c26b24-16f5-44c2-9417-75fc3623c83e -t 2000 00:13:46.212 [ 00:13:46.213 { 00:13:46.213 "name": "d6c26b24-16f5-44c2-9417-75fc3623c83e", 00:13:46.213 "aliases": [ 00:13:46.213 "lvs/lvol" 00:13:46.213 ], 00:13:46.213 "product_name": "Logical Volume", 00:13:46.213 "block_size": 4096, 00:13:46.213 "num_blocks": 38912, 00:13:46.213 "uuid": "d6c26b24-16f5-44c2-9417-75fc3623c83e", 00:13:46.213 "assigned_rate_limits": { 00:13:46.213 "rw_ios_per_sec": 0, 00:13:46.213 "rw_mbytes_per_sec": 0, 00:13:46.213 "r_mbytes_per_sec": 0, 00:13:46.213 "w_mbytes_per_sec": 0 00:13:46.213 }, 00:13:46.213 "claimed": false, 00:13:46.213 "zoned": false, 
00:13:46.213 "supported_io_types": { 00:13:46.213 "read": true, 00:13:46.213 "write": true, 00:13:46.213 "unmap": true, 00:13:46.213 "flush": false, 00:13:46.213 "reset": true, 00:13:46.213 "nvme_admin": false, 00:13:46.213 "nvme_io": false, 00:13:46.213 "nvme_io_md": false, 00:13:46.213 "write_zeroes": true, 00:13:46.213 "zcopy": false, 00:13:46.213 "get_zone_info": false, 00:13:46.213 "zone_management": false, 00:13:46.213 "zone_append": false, 00:13:46.213 "compare": false, 00:13:46.213 "compare_and_write": false, 00:13:46.213 "abort": false, 00:13:46.213 "seek_hole": true, 00:13:46.213 "seek_data": true, 00:13:46.213 "copy": false, 00:13:46.213 "nvme_iov_md": false 00:13:46.213 }, 00:13:46.213 "driver_specific": { 00:13:46.213 "lvol": { 00:13:46.213 "lvol_store_uuid": "dc170bff-9bc3-4d22-b34b-7035ba06eb20", 00:13:46.213 "base_bdev": "aio_bdev", 00:13:46.213 "thin_provision": false, 00:13:46.213 "num_allocated_clusters": 38, 00:13:46.213 "snapshot": false, 00:13:46.213 "clone": false, 00:13:46.213 "esnap_clone": false 00:13:46.213 } 00:13:46.213 } 00:13:46.213 } 00:13:46.213 ] 00:13:46.213 22:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:46.213 22:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:46.213 22:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:46.782 22:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:46.782 22:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:46.782 22:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:47.040 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:47.040 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:47.300 [2024-07-22 22:54:23.573857] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.560 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.561 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:47.561 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:48.129 request: 00:13:48.129 { 00:13:48.129 "uuid": "dc170bff-9bc3-4d22-b34b-7035ba06eb20", 00:13:48.129 "method": "bdev_lvol_get_lvstores", 00:13:48.129 "req_id": 1 00:13:48.129 } 00:13:48.129 Got JSON-RPC error response 00:13:48.129 response: 00:13:48.129 { 00:13:48.129 "code": -19, 00:13:48.129 "message": "No such device" 00:13:48.129 } 00:13:48.129 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:48.129 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.129 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:48.129 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.129 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:48.698 aio_bdev 00:13:48.698 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:48.698 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:48.698 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.698 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:48.698 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.699 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.699 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:49.268 22:54:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6c26b24-16f5-44c2-9417-75fc3623c83e -t 2000 00:13:49.527 [ 00:13:49.527 { 00:13:49.527 "name": "d6c26b24-16f5-44c2-9417-75fc3623c83e", 00:13:49.527 "aliases": [ 00:13:49.527 "lvs/lvol" 00:13:49.527 ], 00:13:49.527 "product_name": "Logical Volume", 00:13:49.527 "block_size": 4096, 00:13:49.527 "num_blocks": 38912, 00:13:49.527 "uuid": "d6c26b24-16f5-44c2-9417-75fc3623c83e", 00:13:49.527 "assigned_rate_limits": { 00:13:49.527 "rw_ios_per_sec": 0, 00:13:49.527 "rw_mbytes_per_sec": 0, 00:13:49.527 "r_mbytes_per_sec": 0, 00:13:49.527 "w_mbytes_per_sec": 0 00:13:49.527 }, 00:13:49.527 "claimed": false, 00:13:49.527 "zoned": false, 00:13:49.527 "supported_io_types": { 00:13:49.527 "read": true, 00:13:49.527 "write": true, 00:13:49.527 "unmap": true, 00:13:49.527 "flush": false, 00:13:49.527 "reset": true, 00:13:49.527 "nvme_admin": false, 00:13:49.527 "nvme_io": false, 00:13:49.527 "nvme_io_md": false, 00:13:49.527 "write_zeroes": true, 00:13:49.527 "zcopy": false, 00:13:49.527 "get_zone_info": false, 00:13:49.527 "zone_management": false, 00:13:49.527 "zone_append": false, 00:13:49.527 "compare": false, 00:13:49.527 "compare_and_write": false, 00:13:49.527 "abort": false, 00:13:49.527 "seek_hole": true, 00:13:49.527 "seek_data": true, 00:13:49.527 "copy": false, 00:13:49.527 "nvme_iov_md": false 00:13:49.527 }, 00:13:49.527 "driver_specific": { 00:13:49.527 "lvol": { 00:13:49.527 "lvol_store_uuid": "dc170bff-9bc3-4d22-b34b-7035ba06eb20", 00:13:49.527 "base_bdev": "aio_bdev", 00:13:49.527 "thin_provision": false, 00:13:49.527 "num_allocated_clusters": 38, 00:13:49.527 "snapshot": false, 00:13:49.527 "clone": false, 00:13:49.527 "esnap_clone": false 00:13:49.527 } 00:13:49.527 } 00:13:49.527 } 00:13:49.527 ] 00:13:49.798 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:49.798 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:49.798 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:50.376 22:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:50.376 22:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 00:13:50.376 22:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:50.943 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:50.943 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6c26b24-16f5-44c2-9417-75fc3623c83e 00:13:51.512 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc170bff-9bc3-4d22-b34b-7035ba06eb20 
00:13:52.081 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:52.650 00:13:52.650 real 0m28.235s 00:13:52.650 user 1m10.498s 00:13:52.650 sys 0m6.401s 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:52.650 ************************************ 00:13:52.650 END TEST lvs_grow_dirty 00:13:52.650 ************************************ 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:52.650 nvmf_trace.0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.650 rmmod nvme_tcp 00:13:52.650 rmmod nvme_fabrics 00:13:52.650 rmmod nvme_keyring 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 791083 ']' 00:13:52.650 
22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 791083 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 791083 ']' 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 791083 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.650 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 791083 00:13:52.910 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:52.910 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:52.910 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 791083' 00:13:52.910 killing process with pid 791083 00:13:52.910 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 791083 00:13:52.910 22:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 791083 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.169 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.078 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.078 00:13:55.078 real 1m0.739s 00:13:55.078 user 1m45.582s 00:13:55.078 sys 0m12.565s 00:13:55.078 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.078 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:55.078 ************************************ 00:13:55.078 END TEST nvmf_lvs_grow 00:13:55.078 ************************************ 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:55.339 ************************************ 00:13:55.339 START TEST nvmf_bdev_io_wait 
00:13:55.339 ************************************ 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:55.339 * Looking for test storage... 00:13:55.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:55.339 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.340 
22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.340 22:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:58.635 22:54:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:58.635 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.635 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:58.636 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:58.636 Found net devices under 0000:84:00.0: cvl_0_0 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:58.636 Found net devices under 0000:84:00.1: cvl_0_1 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.636 22:54:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:13:58.636 00:13:58.636 --- 10.0.0.2 ping statistics --- 00:13:58.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.636 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:13:58.636 00:13:58.636 --- 10.0.0.1 ping statistics --- 00:13:58.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.636 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.636 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=794159 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 794159 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 794159 ']' 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.637 22:54:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:58.897 [2024-07-22 22:54:34.965015] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
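The nvmf_tcp_init sequence traced above builds the test topology used for the rest of this run: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420. A standalone sketch of the same setup, assuming the same interface names and that the two ports are cabled back-to-back:

# sketch only: reproduce the namespace-based NVMe/TCP test topology by hand
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator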
00:13:58.897 [2024-07-22 22:54:34.965213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.897 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.897 [2024-07-22 22:54:35.124914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.156 [2024-07-22 22:54:35.290249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.156 [2024-07-22 22:54:35.290371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.156 [2024-07-22 22:54:35.290409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.156 [2024-07-22 22:54:35.290439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.156 [2024-07-22 22:54:35.290465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.156 [2024-07-22 22:54:35.290633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.156 [2024-07-22 22:54:35.290695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.156 [2024-07-22 22:54:35.290733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.156 [2024-07-22 22:54:35.290736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 [2024-07-22 22:54:35.621424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 Malloc0 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 [2024-07-22 22:54:35.692666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=794300 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=794302 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:59.413 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:59.413 { 00:13:59.413 "params": { 00:13:59.413 "name": "Nvme$subsystem", 00:13:59.414 "trtype": "$TEST_TRANSPORT", 00:13:59.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "$NVMF_PORT", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:59.414 "hdgst": ${hdgst:-false}, 00:13:59.414 "ddgst": ${ddgst:-false} 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 } 00:13:59.414 EOF 00:13:59.414 )") 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=794304 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:59.414 { 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme$subsystem", 00:13:59.414 "trtype": "$TEST_TRANSPORT", 00:13:59.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "$NVMF_PORT", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:59.414 "hdgst": ${hdgst:-false}, 00:13:59.414 "ddgst": ${ddgst:-false} 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 } 00:13:59.414 EOF 00:13:59.414 )") 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=794307 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:59.414 { 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme$subsystem", 00:13:59.414 "trtype": "$TEST_TRANSPORT", 00:13:59.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": 
"$NVMF_PORT", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:59.414 "hdgst": ${hdgst:-false}, 00:13:59.414 "ddgst": ${ddgst:-false} 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 } 00:13:59.414 EOF 00:13:59.414 )") 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:59.414 { 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme$subsystem", 00:13:59.414 "trtype": "$TEST_TRANSPORT", 00:13:59.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "$NVMF_PORT", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:59.414 "hdgst": ${hdgst:-false}, 00:13:59.414 "ddgst": ${ddgst:-false} 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 } 00:13:59.414 EOF 00:13:59.414 )") 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 794300 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme1", 00:13:59.414 "trtype": "tcp", 00:13:59.414 "traddr": "10.0.0.2", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "4420", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.414 "hdgst": false, 00:13:59.414 "ddgst": false 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 }' 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme1", 00:13:59.414 "trtype": "tcp", 00:13:59.414 "traddr": "10.0.0.2", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "4420", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.414 "hdgst": false, 00:13:59.414 "ddgst": false 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 }' 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme1", 00:13:59.414 "trtype": "tcp", 00:13:59.414 "traddr": "10.0.0.2", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "4420", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.414 "hdgst": false, 00:13:59.414 "ddgst": false 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 }' 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:59.414 22:54:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:59.414 "params": { 00:13:59.414 "name": "Nvme1", 00:13:59.414 "trtype": "tcp", 00:13:59.414 "traddr": "10.0.0.2", 00:13:59.414 "adrfam": "ipv4", 00:13:59.414 "trsvcid": "4420", 00:13:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.414 "hdgst": false, 00:13:59.414 "ddgst": false 00:13:59.414 }, 00:13:59.414 "method": "bdev_nvme_attach_controller" 00:13:59.414 }' 00:13:59.671 [2024-07-22 22:54:35.742163] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:13:59.671 [2024-07-22 22:54:35.742165] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
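Each bdevperf instance reads its configuration from /dev/fd/63, where gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed above wrapped in a regular SPDK JSON config. A roughly equivalent standalone invocation of the write job (a sketch; the file name is illustrative and the real wrapper may carry extra fields):

cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 -s 256 -q 128 -o 4096 -w write -t 1 --json /tmp/nvmf_bdev.json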
00:13:59.671 [2024-07-22 22:54:35.742262] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:59.671 [2024-07-22 22:54:35.742262] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:59.671 [2024-07-22 22:54:35.752186] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:13:59.671 [2024-07-22 22:54:35.752189] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:13:59.671 [2024-07-22 22:54:35.752286] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:59.671 [2024-07-22 22:54:35.752286] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.671 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.671 [2024-07-22 22:54:35.941797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.929 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.929 [2024-07-22 22:54:36.038186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:59.929 [2024-07-22 22:54:36.064077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.929 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.929 [2024-07-22 22:54:36.156918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.929 [2024-07-22 22:54:36.159901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:59.929 [2024-07-22 22:54:36.240320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:00.186 [2024-07-22 22:54:36.247763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.187 [2024-07-22 22:54:36.333343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:00.445 Running I/O for 1 seconds... 00:14:00.445 Running I/O for 1 seconds... 00:14:00.445 Running I/O for 1 seconds... 00:14:00.445 Running I/O for 1 seconds... 
00:14:01.381 00:14:01.381 Latency(us) 00:14:01.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.381 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:01.381 Nvme1n1 : 1.02 4656.85 18.19 0.00 0.00 27248.15 5995.33 39612.87 00:14:01.381 =================================================================================================================== 00:14:01.381 Total : 4656.85 18.19 0.00 0.00 27248.15 5995.33 39612.87 00:14:01.381 00:14:01.381 Latency(us) 00:14:01.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.381 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:01.381 Nvme1n1 : 1.01 4585.84 17.91 0.00 0.00 27796.59 7330.32 55535.69 00:14:01.381 =================================================================================================================== 00:14:01.381 Total : 4585.84 17.91 0.00 0.00 27796.59 7330.32 55535.69 00:14:01.381 00:14:01.381 Latency(us) 00:14:01.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.381 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:01.381 Nvme1n1 : 1.01 6657.98 26.01 0.00 0.00 19093.60 12913.02 32816.55 00:14:01.381 =================================================================================================================== 00:14:01.381 Total : 6657.98 26.01 0.00 0.00 19093.60 12913.02 32816.55 00:14:01.381 00:14:01.381 Latency(us) 00:14:01.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.381 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:01.381 Nvme1n1 : 1.00 143909.45 562.15 0.00 0.00 885.25 406.57 1055.86 00:14:01.381 =================================================================================================================== 00:14:01.381 Total : 143909.45 562.15 0.00 0.00 885.25 406.57 1055.86 00:14:01.640 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 794302 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 794304 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 794307 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
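For the result tables above, the MiB/s column is simply IOPS times the 4096-byte IO size: 4656.85 × 4096 / 2^20 ≈ 18.19 MiB/s for the write job, 6657.98 × 4096 / 2^20 ≈ 26.01 MiB/s for the read job, and 143909.45 × 4096 / 2^20 ≈ 562.15 MiB/s for the flush job, whose far higher rate is presumably because a flush against a RAM-backed Malloc bdev completes almost immediately.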
00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.899 rmmod nvme_tcp 00:14:01.899 rmmod nvme_fabrics 00:14:01.899 rmmod nvme_keyring 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 794159 ']' 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 794159 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 794159 ']' 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 794159 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.899 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 794159 00:14:02.158 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:02.158 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:02.158 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 794159' 00:14:02.158 killing process with pid 794159 00:14:02.158 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 794159 00:14:02.158 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 794159 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.417 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.324 00:14:04.324 real 0m9.143s 00:14:04.324 user 0m19.935s 00:14:04.324 sys 0m4.786s 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:04.324 ************************************ 00:14:04.324 END TEST nvmf_bdev_io_wait 
00:14:04.324 ************************************ 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.324 22:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:04.583 ************************************ 00:14:04.584 START TEST nvmf_queue_depth 00:14:04.584 ************************************ 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:04.584 * Looking for test storage... 00:14:04.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.584 
22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.584 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 
-- # local -ga net_devs 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:07.912 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:07.912 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.912 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:07.913 Found net devices under 0000:84:00.0: cvl_0_0 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:07.913 Found net devices under 
0000:84:00.1: cvl_0_1 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.913 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:14:07.913 00:14:07.913 --- 10.0.0.2 ping statistics --- 00:14:07.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.913 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:14:07.913 00:14:07.913 --- 10.0.0.1 ping statistics --- 00:14:07.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.913 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=796683 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 796683 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 796683 ']' 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
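For readers skimming the trace, the network bring-up that nvmf_tcp_init performed above condenses to the minimal sketch below. The interface names, addresses, iptables rule and the nvmf_tgt invocation are taken verbatim from the traced commands; the standalone-script framing (variables, set -e, relative binary path) is an assumption and not the literal nvmf/common.sh code.

  #!/usr/bin/env bash
  # Sketch of the namespace topology nvmf_tcp_init builds (assumes root and that the
  # two ice ports are already named cvl_0_0 / cvl_0_1, as they are in this log).
  set -e

  TARGET_IF=cvl_0_0       # moved into a private namespace, becomes the target side
  INITIATOR_IF=cvl_0_1    # stays in the default namespace, becomes the initiator side
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP traffic to the default port on the initiator-side interface.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions, as the trace does, before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # The nvmf target then runs inside the namespace (flags as in the trace; the log
  # uses the full Jenkins workspace path instead of this relative one).
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The net effect is a point-to-point NVMe/TCP link: the target listens at 10.0.0.2:4420 inside cvl_0_0_ns_spdk, and the initiator-side tools reach it through cvl_0_1 at 10.0.0.1.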
00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.913 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.172 [2024-07-22 22:54:44.319390] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:14:08.172 [2024-07-22 22:54:44.319556] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.172 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.172 [2024-07-22 22:54:44.449356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.431 [2024-07-22 22:54:44.560415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.431 [2024-07-22 22:54:44.560480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.431 [2024-07-22 22:54:44.560500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.431 [2024-07-22 22:54:44.560516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.431 [2024-07-22 22:54:44.560530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.431 [2024-07-22 22:54:44.560566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 [2024-07-22 22:54:44.735413] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.431 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 Malloc0 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 [2024-07-22 22:54:44.801817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=796707 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 796707 /var/tmp/bdevperf.sock 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 796707 ']' 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.690 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 [2024-07-22 22:54:44.860436] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
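At this point the target is provisioned and a bdevperf instance is being brought up as the initiator. A condensed sketch of the same sequence follows, assuming rpc_cmd in the trace simply wraps scripts/rpc.py, that the commands run from the SPDK source root, and that a polling loop stands in for the waitforlisten helper; the RPC names and flags themselves are copied from the trace.

  #!/usr/bin/env bash
  set -e
  RPC=./scripts/rpc.py            # target RPC socket defaults to /var/tmp/spdk.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a
  # subsystem exposing it on 10.0.0.2:4420 (flags exactly as in the trace).
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf in wait-for-tests mode (-z) with queue depth 1024,
  # 4 KiB I/O, verify workload, 10 s runtime.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # Stand-in for waitforlisten: wait until the bdevperf RPC socket exists.
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.5; done

  # Attach the remote namespace over NVMe/TCP and kick off the run.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

perform_tests drives the 10-second verify workload whose IOPS/latency table appears further down in the log; killprocess and nvmftestfini then tear the initiator, target and namespace back down.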
00:14:08.690 [2024-07-22 22:54:44.860528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796707 ] 00:14:08.690 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.690 [2024-07-22 22:54:44.942093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.949 [2024-07-22 22:54:45.050382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.918 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.918 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:09.918 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:09.918 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.918 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:10.176 NVMe0n1 00:14:10.176 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.176 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:10.176 Running I/O for 10 seconds... 00:14:22.383 00:14:22.383 Latency(us) 00:14:22.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.383 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:22.383 Verification LBA range: start 0x0 length 0x4000 00:14:22.383 NVMe0n1 : 10.09 6822.38 26.65 0.00 0.00 149165.89 16117.00 91653.31 00:14:22.383 =================================================================================================================== 00:14:22.383 Total : 6822.38 26.65 0.00 0.00 149165.89 16117.00 91653.31 00:14:22.384 0 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 796707 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 796707 ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 796707 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796707 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796707' 00:14:22.384 killing process with pid 796707 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 796707 00:14:22.384 Received shutdown signal, 
test time was about 10.000000 seconds 00:14:22.384 00:14:22.384 Latency(us) 00:14:22.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.384 =================================================================================================================== 00:14:22.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 796707 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.384 rmmod nvme_tcp 00:14:22.384 rmmod nvme_fabrics 00:14:22.384 rmmod nvme_keyring 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 796683 ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 796683 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 796683 ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 796683 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796683 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796683' 00:14:22.384 killing process with pid 796683 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 796683 00:14:22.384 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 796683 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.384 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.324 00:14:23.324 real 0m18.698s 00:14:23.324 user 0m25.651s 00:14:23.324 sys 0m4.586s 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.324 ************************************ 00:14:23.324 END TEST nvmf_queue_depth 00:14:23.324 ************************************ 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:23.324 ************************************ 00:14:23.324 START TEST nvmf_target_multipath 00:14:23.324 ************************************ 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:23.324 * Looking for test storage... 
00:14:23.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.324 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.325 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:26.622 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:26.622 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:26.622 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:26.623 Found net devices under 0000:84:00.0: cvl_0_0 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.623 22:55:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:26.623 Found net devices under 0000:84:00.1: cvl_0_1 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.623 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.883 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:14:26.883 00:14:26.883 --- 10.0.0.2 ping statistics --- 00:14:26.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.883 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:14:26.883 00:14:26.883 --- 10.0.0.1 ping statistics --- 00:14:26.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.883 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.883 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:26.883 only one NIC for nvmf test 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.883 rmmod nvme_tcp 00:14:26.883 rmmod nvme_fabrics 00:14:26.883 rmmod nvme_keyring 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.883 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.422 00:14:29.422 real 0m5.711s 
00:14:29.422 user 0m1.106s 00:14:29.422 sys 0m2.629s 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:29.422 ************************************ 00:14:29.422 END TEST nvmf_target_multipath 00:14:29.422 ************************************ 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:29.422 ************************************ 00:14:29.422 START TEST nvmf_zcopy 00:14:29.422 ************************************ 00:14:29.422 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:29.423 * Looking for test storage... 00:14:29.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:29.423 22:55:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.423 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.713 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:32.714 22:55:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:32.714 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:32.714 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:32.714 Found net devices under 0000:84:00.0: cvl_0_0 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:32.714 Found net devices under 0000:84:00.1: cvl_0_1 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.714 22:55:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:14:32.714 00:14:32.714 --- 10.0.0.2 ping statistics --- 00:14:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.714 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:32.714 00:14:32.714 --- 10.0.0.1 ping statistics --- 00:14:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.714 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.714 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=802191 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 802191 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 802191 ']' 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.715 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.715 [2024-07-22 22:55:08.773459] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
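For reference, the nvmf_tcp_init plumbing traced above reduces to the standalone sequence below. This is a minimal sketch that only consolidates the commands visible in the trace, assuming the same ice-driver port names (cvl_0_0 / cvl_0_1) and 10.0.0.0/24 addressing this run happened to use; the harness derives those values dynamically.

# Move the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace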
00:14:32.715 [2024-07-22 22:55:08.773628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.715 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.715 [2024-07-22 22:55:08.903506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.715 [2024-07-22 22:55:09.014323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.715 [2024-07-22 22:55:09.014391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.715 [2024-07-22 22:55:09.014410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.715 [2024-07-22 22:55:09.014426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.715 [2024-07-22 22:55:09.014440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.715 [2024-07-22 22:55:09.014488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 [2024-07-22 22:55:09.185823] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 [2024-07-22 22:55:09.202018] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 malloc0 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:32.973 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:32.973 { 00:14:32.973 "params": { 00:14:32.973 "name": "Nvme$subsystem", 00:14:32.973 "trtype": "$TEST_TRANSPORT", 00:14:32.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.973 "adrfam": "ipv4", 00:14:32.973 "trsvcid": "$NVMF_PORT", 00:14:32.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.974 "hdgst": ${hdgst:-false}, 00:14:32.974 "ddgst": ${ddgst:-false} 00:14:32.974 }, 00:14:32.974 "method": "bdev_nvme_attach_controller" 00:14:32.974 } 00:14:32.974 EOF 00:14:32.974 )") 00:14:32.974 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:32.974 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
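The target bring-up that zcopy.sh just traced (a zero-copy-enabled TCP transport, subsystem cnode1 with TCP listeners on 10.0.0.2:4420, and a 32 MiB / 4 KiB-block malloc bdev attached as namespace 1) corresponds to roughly the scripts/rpc.py sequence below. This is a sketch only: rpc_cmd is the harness wrapper around the RPC client, and the default /var/tmp/spdk.sock socket is assumed here.

# Same methods and flags as the rpc_cmd lines above, issued directly against the running nvmf_tgt
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1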
00:14:32.974 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:32.974 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:32.974 "params": { 00:14:32.974 "name": "Nvme1", 00:14:32.974 "trtype": "tcp", 00:14:32.974 "traddr": "10.0.0.2", 00:14:32.974 "adrfam": "ipv4", 00:14:32.974 "trsvcid": "4420", 00:14:32.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.974 "hdgst": false, 00:14:32.974 "ddgst": false 00:14:32.974 }, 00:14:32.974 "method": "bdev_nvme_attach_controller" 00:14:32.974 }' 00:14:33.232 [2024-07-22 22:55:09.348254] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:14:33.232 [2024-07-22 22:55:09.348443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802279 ] 00:14:33.232 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.232 [2024-07-22 22:55:09.467908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.491 [2024-07-22 22:55:09.576856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.749 Running I/O for 10 seconds... 00:14:43.729 00:14:43.729 Latency(us) 00:14:43.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.729 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:43.729 Verification LBA range: start 0x0 length 0x1000 00:14:43.729 Nvme1n1 : 10.06 4482.77 35.02 0.00 0.00 28354.03 983.04 42525.58 00:14:43.729 =================================================================================================================== 00:14:43.729 Total : 4482.77 35.02 0.00 0.00 28354.03 983.04 42525.58 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=803471 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:43.993 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:43.993 { 00:14:43.993 "params": { 00:14:43.993 "name": "Nvme$subsystem", 00:14:43.993 "trtype": "$TEST_TRANSPORT", 00:14:43.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:43.993 "adrfam": "ipv4", 00:14:43.993 "trsvcid": "$NVMF_PORT", 00:14:43.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:43.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:43.994 "hdgst": ${hdgst:-false}, 00:14:43.994 "ddgst": ${ddgst:-false} 00:14:43.994 }, 00:14:43.994 "method": "bdev_nvme_attach_controller" 00:14:43.994 } 00:14:43.994 EOF 00:14:43.994 )") 00:14:43.994 22:55:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:43.994 [2024-07-22 22:55:20.184330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.184390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:43.994 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:43.994 22:55:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:43.994 "params": { 00:14:43.994 "name": "Nvme1", 00:14:43.994 "trtype": "tcp", 00:14:43.994 "traddr": "10.0.0.2", 00:14:43.994 "adrfam": "ipv4", 00:14:43.994 "trsvcid": "4420", 00:14:43.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.994 "hdgst": false, 00:14:43.994 "ddgst": false 00:14:43.994 }, 00:14:43.994 "method": "bdev_nvme_attach_controller" 00:14:43.994 }' 00:14:43.994 [2024-07-22 22:55:20.192276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.192322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.200298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.200345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.208328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.208371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.216355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.216385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.224371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.224401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.232394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.232425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.240412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.240442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.248429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.248459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.256458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.256488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.264475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.264504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.272496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.272533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.275699] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:14:43.994 [2024-07-22 22:55:20.275941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803471 ] 00:14:43.994 [2024-07-22 22:55:20.280519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.280550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.288539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.288569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:43.994 [2024-07-22 22:55:20.296574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:43.994 [2024-07-22 22:55:20.296611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.304607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.304645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.312627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.312669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.320635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.320668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.328653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.328683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.336676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.336709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.344695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.344726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.352718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.352747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.297 [2024-07-22 22:55:20.360742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.360772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.368764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.368793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
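Both bdevperf jobs traced above take their bdev definition from a JSON config streamed over a process-substitution descriptor (/dev/fd/62 for the 10 s verify run, /dev/fd/63 for the 5 s randrw run). A standalone sketch of the same setup follows: the bdev_nvme_attach_controller parameters are exactly what the trace printed, while the surrounding subsystems/bdev wrapper and the /tmp file name are assumptions, since gen_nvmf_target_json's full output is not shown in this excerpt.

# Hypothetical config file standing in for the /dev/fd/6x stream the harness uses
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 10 s verify pass, queue depth 128, 8 KiB I/O -- the first run above
./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192
# 5 s 50/50 random read/write pass -- the second run above
./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192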
00:14:44.297 [2024-07-22 22:55:20.376785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.376815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.384807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.384836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.392831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.392861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.400852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.400881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.403983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.297 [2024-07-22 22:55:20.408879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.408909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.416917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.416955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.424921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.424953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.432943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.432974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.440965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.440994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.448997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.449030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.457012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.457043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.465053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.465087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.473076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.473113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.481081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.481112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.489102] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.489132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.497123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.497153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.297 [2024-07-22 22:55:20.505145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.297 [2024-07-22 22:55:20.505175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.513170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.513199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.513486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.298 [2024-07-22 22:55:20.521194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.521223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.529222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.529253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.537260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.537296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.545284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.545327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.553305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.553353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.561328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.561363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.569351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.569388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.577423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.577480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.585435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.585479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.298 [2024-07-22 22:55:20.593449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.298 [2024-07-22 22:55:20.593495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.558 [2024-07-22 22:55:20.601440] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.558 [2024-07-22 22:55:20.601479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.558 [2024-07-22 22:55:20.609461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.558 [2024-07-22 22:55:20.609498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.558 [2024-07-22 22:55:20.617484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.558 [2024-07-22 22:55:20.617527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.558 [2024-07-22 22:55:20.625493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.558 [2024-07-22 22:55:20.625528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.633510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.633540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.641532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.641563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.649568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.649602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.657606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.657641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.665618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.665652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.673642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.673676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.681662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.681695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.689689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.689723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.697711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.697743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.705737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.705772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.713825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.713861] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.721834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.721865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 Running I/O for 5 seconds... 00:14:44.559 [2024-07-22 22:55:20.734438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.734476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.746983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.747021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.759541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.759579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.773900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.773938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.788730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.788768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.803792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.803830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.818427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.818464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.832515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.832552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.846770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.846807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.559 [2024-07-22 22:55:20.861051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.559 [2024-07-22 22:55:20.861088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.876212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.876251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.890773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.890810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.904967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.905004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.919693] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.919730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.934067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.934105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.949000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.949048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.963432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.963470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.977414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.977451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:20.991454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:20.991491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.005608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.005645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.019749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.019787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.034230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.034268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.049047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.049084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.063660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.063698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.077944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.077981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.092428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.092466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.107367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.107404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:44.818 [2024-07-22 22:55:21.121936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:44.818 [2024-07-22 22:55:21.121974] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.136980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.137019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.150641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.150679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.165058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.165095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.179651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.179689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.194531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.194569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.209290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.209342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.223655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.223709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.237720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.237757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.252348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.252387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.266356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.266395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.281221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.281259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.295765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.295803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.309737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.309777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.323889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.323928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.338141] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.338179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.352108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.352146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.366836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.366874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.078 [2024-07-22 22:55:21.381066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.078 [2024-07-22 22:55:21.381103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.396201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.396241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.410597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.410646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.424876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.424917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.439053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.439091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.453257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.453295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.467526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.467563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.482018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.482059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.496611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.496659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.510692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.510730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.524941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.524980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.539081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.539118] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.553193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.553230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.567533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.567571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.581791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.581829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.596275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.596324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.610389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.610427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.624391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.624427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.337 [2024-07-22 22:55:21.639366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.337 [2024-07-22 22:55:21.639403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.654756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.654794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.669296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.669343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.684074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.684111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.699061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.699100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.714162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.714200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.728403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.728440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.742523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.742560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.757234] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.757271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.771526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.771573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.786478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.786516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.801305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.801352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.815692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.815729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.829891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.829929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.844454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.844492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.858937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.858974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.873837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.873874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.886761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.886799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.597 [2024-07-22 22:55:21.900696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.597 [2024-07-22 22:55:21.900733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.856 [2024-07-22 22:55:21.916022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.856 [2024-07-22 22:55:21.916060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.856 [2024-07-22 22:55:21.931026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.856 [2024-07-22 22:55:21.931063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.856 [2024-07-22 22:55:21.945478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.856 [2024-07-22 22:55:21.945515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.856 [2024-07-22 22:55:21.959821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.856 [2024-07-22 22:55:21.959858] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.856 [2024-07-22 22:55:21.974117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.856 [2024-07-22 22:55:21.974153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.856 [2024-07-22 22:55:21.988826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:21.988863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.003299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.003346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.017460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.017498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.031788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.031825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.045574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.045622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.060089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.060127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.074443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.074481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.089407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.089445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.103525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.103562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.117854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.117891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.132274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.132324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.146430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.146467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.857 [2024-07-22 22:55:22.160482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.857 [2024-07-22 22:55:22.160520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.174643] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.174680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.188838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.188875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.203020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.203057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.217042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.217080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.231197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.231234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.245481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.245518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.259507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.259545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.274159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.274196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.288604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.288641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.302910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.302947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.316762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.316800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.116 [2024-07-22 22:55:22.330955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.116 [2024-07-22 22:55:22.330991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.117 [2024-07-22 22:55:22.345431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.117 [2024-07-22 22:55:22.345468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.117 [2024-07-22 22:55:22.359253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.117 [2024-07-22 22:55:22.359290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.117 [2024-07-22 22:55:22.373320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.117 [2024-07-22 22:55:22.373357] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.117 [2024-07-22 22:55:22.387467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.117 [2024-07-22 22:55:22.387504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.117 [2024-07-22 22:55:22.402165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.117 [2024-07-22 22:55:22.402202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.117 [2024-07-22 22:55:22.417058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.117 [2024-07-22 22:55:22.417095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.432329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.432367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.445612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.445649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.457451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.457488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.471972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.472009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.486606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.486644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.501092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.501131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.515180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.515218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.529425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.529463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.543567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.543605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.558403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.558442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.572338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.572375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.586917] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.586956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.601122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.601160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.615689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.615728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.630293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.630342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.644865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.644903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.659267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.659304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.376 [2024-07-22 22:55:22.673764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.376 [2024-07-22 22:55:22.673802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.688833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.688872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.703992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.704031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.718217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.718256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.731985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.732022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.746624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.746661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.760660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.760697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.775367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.775405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.789882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.789920] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.804274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.804320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.818361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.818398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.832664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.832701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.847071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.847108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.861529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.861567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.875561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.875598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.889589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.635 [2024-07-22 22:55:22.889627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.635 [2024-07-22 22:55:22.903902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.636 [2024-07-22 22:55:22.903940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.636 [2024-07-22 22:55:22.918845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.636 [2024-07-22 22:55:22.918882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.636 [2024-07-22 22:55:22.933483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.636 [2024-07-22 22:55:22.933520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.894 [2024-07-22 22:55:22.948369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.894 [2024-07-22 22:55:22.948408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.894 [2024-07-22 22:55:22.962427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.894 [2024-07-22 22:55:22.962465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.894 [2024-07-22 22:55:22.976744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.894 [2024-07-22 22:55:22.976782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.894 [2024-07-22 22:55:22.990743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.894 [2024-07-22 22:55:22.990781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.005169] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.005206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.019229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.019267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.034036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.034074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.048027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.048064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.062656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.062694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.077281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.077332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.092016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.092053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.106435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.106477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.121006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.121052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.135440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.135478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.149147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.149185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.163840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.163877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.178154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.178191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.895 [2024-07-22 22:55:23.192737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.895 [2024-07-22 22:55:23.192775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.207676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.207715] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.222495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.222534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.236809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.236846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.250982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.251020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.265268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.265305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.279668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.279706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.294851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.294889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.309750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.309788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.323840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.323878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.338241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.338278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.352569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.352606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.366540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.366577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.381270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.381307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.395134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.395181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.409657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.409695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.423567] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.423604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.437971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.438008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.153 [2024-07-22 22:55:23.452219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.153 [2024-07-22 22:55:23.452256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.467269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.467328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.481498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.481536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.496206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.496248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.510387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.510425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.524878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.524917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.539253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.539290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.553714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.553752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.568382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.568419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.582402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.582440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.596217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.596255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.610966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.611004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.625928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.625966] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.640453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.640491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.654989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.655026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.669401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.669446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.683947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.683984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.698640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.698677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.412 [2024-07-22 22:55:23.713429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.412 [2024-07-22 22:55:23.713466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.671 [2024-07-22 22:55:23.728368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.671 [2024-07-22 22:55:23.728407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.671 [2024-07-22 22:55:23.743142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.671 [2024-07-22 22:55:23.743180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.671 [2024-07-22 22:55:23.757524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.671 [2024-07-22 22:55:23.757563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.671 [2024-07-22 22:55:23.772766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.671 [2024-07-22 22:55:23.772805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.671 [2024-07-22 22:55:23.785095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.785134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.799401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.799438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.813665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.813703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.828215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.828253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.842680] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.842718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.857322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.857367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.871917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.871956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.886206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.886244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.900705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.900743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.915125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.915163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.929121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.929159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.943339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.943390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.958057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.958094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.672 [2024-07-22 22:55:23.972822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.672 [2024-07-22 22:55:23.972859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:23.987777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:23.987816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.002321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.002362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.016947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.016985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.031441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.031479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.045931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.045968] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.060177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.060214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.074329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.074367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.089120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.089158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.103474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.103511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.117704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.117741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.132112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.132149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.146433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.146470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.160390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.160427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.174623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.174660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.189095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.189133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.931 [2024-07-22 22:55:24.203199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.931 [2024-07-22 22:55:24.203236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.932 [2024-07-22 22:55:24.217198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.932 [2024-07-22 22:55:24.217246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.932 [2024-07-22 22:55:24.231447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.932 [2024-07-22 22:55:24.231484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.246253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.246292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.260799] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.260836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.275476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.275513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.289797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.289835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.304230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.304267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.318417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.318454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.333089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.333128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.347672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.347709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.362107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.362144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.376631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.376668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.390571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.190 [2024-07-22 22:55:24.390608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.190 [2024-07-22 22:55:24.404786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.404822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.191 [2024-07-22 22:55:24.418826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.418863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.191 [2024-07-22 22:55:24.432889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.432927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.191 [2024-07-22 22:55:24.447035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.447073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.191 [2024-07-22 22:55:24.461508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.461544] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.191 [2024-07-22 22:55:24.475808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.475846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.191 [2024-07-22 22:55:24.489392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.191 [2024-07-22 22:55:24.489429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.504111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.504149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.518474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.518513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.532360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.532398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.546277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.546324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.560880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.560917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.575223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.575260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.589776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.589814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.604333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.604370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.618364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.618402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.632295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.632343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.646740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.646777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.664745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.664782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.678943] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.678981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.693177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.693213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.707756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.707795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.722293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.722343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.736491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.736528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.450 [2024-07-22 22:55:24.750405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.450 [2024-07-22 22:55:24.750443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.766057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.766096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.780659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.780697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.794881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.794919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.808675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.808712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.822892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.822929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.837512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.837555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.851402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.851439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.866142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.866179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.880439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.880476] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.894765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.894801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.909138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.909176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.923046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.923084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.937382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.937420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.951818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.709 [2024-07-22 22:55:24.951856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.709 [2024-07-22 22:55:24.966601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.710 [2024-07-22 22:55:24.966640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.710 [2024-07-22 22:55:24.981636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.710 [2024-07-22 22:55:24.981674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.710 [2024-07-22 22:55:24.995493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.710 [2024-07-22 22:55:24.995532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.710 [2024-07-22 22:55:25.009929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.710 [2024-07-22 22:55:25.009967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.025002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.025042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.039147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.039186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.053016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.053055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.067054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.067092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.081432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.081470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.095878] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.095916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.110268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.110306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.124426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.124465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.138475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.138512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.152705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.152743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.166852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.166890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.181379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.181417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.195597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.195634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.209731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.209769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.223683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.223721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.238021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.238059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.252191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.252228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.969 [2024-07-22 22:55:25.266376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.969 [2024-07-22 22:55:25.266413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.228 [2024-07-22 22:55:25.280752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.228 [2024-07-22 22:55:25.280798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.228 [2024-07-22 22:55:25.295104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.228 [2024-07-22 22:55:25.295142] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.228 [2024-07-22 22:55:25.309343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.228 [2024-07-22 22:55:25.309381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.228 [2024-07-22 22:55:25.323981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.228 [2024-07-22 22:55:25.324018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.228 [2024-07-22 22:55:25.338403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.228 [2024-07-22 22:55:25.338440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.228 [2024-07-22 22:55:25.352823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.228 [2024-07-22 22:55:25.352860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.367374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.367412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.381926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.381964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.396209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.396246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.411122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.411159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.424886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.424923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.438671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.438708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.453362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.453399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.468248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.468284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.482176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.482214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.496343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.496380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.510429] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.510466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.524681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.524718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.229 [2024-07-22 22:55:25.538793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.229 [2024-07-22 22:55:25.538832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.552638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.552676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.566994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.567042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.581104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.581142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.595496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.595535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.610753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.610790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.624999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.625037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.638753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.638791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.652876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.652913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.667408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.667447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.681908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.681947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.696394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.696431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.710857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.710895] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.725639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.725676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.740108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.740145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.748834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.748870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 00:14:49.489 Latency(us) 00:14:49.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.489 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:49.489 Nvme1n1 : 5.01 8821.30 68.92 0.00 0.00 14486.89 6359.42 22816.24 00:14:49.489 =================================================================================================================== 00:14:49.489 Total : 8821.30 68.92 0.00 0.00 14486.89 6359.42 22816.24 00:14:49.489 [2024-07-22 22:55:25.755303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.755347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.763332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.763367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.771356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.771403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.779404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.779450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.787432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.787482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.489 [2024-07-22 22:55:25.795460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.489 [2024-07-22 22:55:25.795509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.803522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.803582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.811497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.811544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.819524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.819572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.827546] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.827596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.835567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.835612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.843589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.843636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.851613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.851660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.859635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.859684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.867704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.867754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.875722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.875773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.883736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.883779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.891759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.891806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.899773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.899820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.907798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.907844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.915806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.915844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.923811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.923854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.931839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.931873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.939899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.939946] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.947914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.947962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.955936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.955982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.963924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.963955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.971956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.971989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.980009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.980054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.988031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.988076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:25.996051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:25.996095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:26.004043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:26.004073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:26.012066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:26.012096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.748 [2024-07-22 22:55:26.028106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.748 [2024-07-22 22:55:26.028137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (803471) - No such process 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 803471 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.749 delay0 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.749 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.008 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.008 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:50.008 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.008 [2024-07-22 22:55:26.206466] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:58.125 Initializing NVMe Controllers 00:14:58.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:58.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:58.125 Initialization complete. Launching workers. 00:14:58.125 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 12577 00:14:58.125 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12742, failed to submit 100 00:14:58.125 success 12624, unsuccess 118, failed 0 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.125 rmmod nvme_tcp 00:14:58.125 rmmod nvme_fabrics 00:14:58.125 rmmod nvme_keyring 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 802191 ']' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 802191 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 802191 ']' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 802191 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802191 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802191' 00:14:58.125 killing process with pid 802191 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 802191 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 802191 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.125 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.505 00:14:59.505 real 0m30.484s 00:14:59.505 user 0m42.085s 00:14:59.505 sys 0m11.412s 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:59.505 ************************************ 00:14:59.505 END TEST nvmf_zcopy 00:14:59.505 ************************************ 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:59.505 ************************************ 00:14:59.505 START TEST nvmf_nmic 00:14:59.505 ************************************ 00:14:59.505 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:59.766 * Looking for test storage... 
00:14:59.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.766 22:55:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.766 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.062 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:03.063 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:03.063 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.063 22:55:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:03.063 Found net devices under 0000:84:00.0: cvl_0_0 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:03.063 Found net devices under 0000:84:00.1: cvl_0_1 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:03.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:15:03.063 00:15:03.063 --- 10.0.0.2 ping statistics --- 00:15:03.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.063 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:15:03.063 00:15:03.063 --- 10.0.0.1 ping statistics --- 00:15:03.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.063 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=806943 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 806943 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 806943 ']' 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.063 22:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:03.324 [2024-07-22 22:55:39.386431] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:15:03.324 [2024-07-22 22:55:39.386595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.324 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.324 [2024-07-22 22:55:39.543737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.582 [2024-07-22 22:55:39.698389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.582 [2024-07-22 22:55:39.698456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.582 [2024-07-22 22:55:39.698477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.582 [2024-07-22 22:55:39.698494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.582 [2024-07-22 22:55:39.698509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
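[editor's note] The log above records the nmic test-bed bring-up: one port of the NIC pair is moved into a private network namespace, both ends are addressed and verified with ping, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with core mask 0xF. The following is a condensed sketch of that same sequence; interface names (cvl_0_0/cvl_0_1), addresses, paths and the wait-for-RPC loop are taken from or simplified from this run and should be treated as environment-specific assumptions, not a fixed recipe (the harness's waitforlisten does the readiness check more carefully).

    # Sketch of the target-side bring-up shown in the log (nvmf/common.sh via nmic.sh).
    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Move the target port into its own namespace and address both ends of the link.
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Verify reachability in both directions, load the host-side driver.
    ping -c 1 10.0.0.2
    ip netns exec $NS ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # Start the target inside the namespace and wait for its RPC socket to answer.
    ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done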
00:15:03.582 [2024-07-22 22:55:39.698612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.582 [2024-07-22 22:55:39.698677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.582 [2024-07-22 22:55:39.698737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.582 [2024-07-22 22:55:39.698740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.517 [2024-07-22 22:55:40.759808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.517 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 Malloc0 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.518 [2024-07-22 22:55:40.817992] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:04.518 test case1: single bdev can't be used in multiple subsystems 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.518 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.776 [2024-07-22 22:55:40.841819] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:04.776 [2024-07-22 22:55:40.841863] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:04.776 [2024-07-22 22:55:40.841890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.776 request: 00:15:04.776 { 00:15:04.776 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:04.776 "namespace": { 00:15:04.776 "bdev_name": "Malloc0", 00:15:04.776 "no_auto_visible": false 00:15:04.776 }, 00:15:04.776 "method": "nvmf_subsystem_add_ns", 00:15:04.776 "req_id": 1 00:15:04.776 } 00:15:04.776 Got JSON-RPC error response 00:15:04.776 response: 00:15:04.776 { 00:15:04.776 "code": -32602, 00:15:04.776 "message": "Invalid parameters" 00:15:04.776 } 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:04.776 Adding namespace failed - expected result. 
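[editor's note] "test case1" above exercises the exclusive-write claim on a bdev: once Malloc0 backs a namespace of cnode1, adding it to cnode2 is rejected with JSON-RPC error -32602 ("Invalid parameters"), as the bdev.c/subsystem.c errors show. The sketch below replays that flow using scripts/rpc.py directly (rpc_cmd in the log is the harness's wrapper around it); the NQNs, serial numbers, transport options and the Malloc0 name are taken from this run.

    # Sketch of "test case1": a single bdev can't be used in multiple subsystems.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # First subsystem claims Malloc0 exclusively when it is added as a namespace.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Second subsystem: adding the already-claimed bdev is expected to fail with
    # the -32602 "Invalid parameters" response logged above.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo " Adding namespace failed - expected result."
    fi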
00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:04.776 test case2: host connect to nvmf target in multiple paths 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.776 [2024-07-22 22:55:40.853981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.776 22:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.342 22:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:05.909 22:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.909 22:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:05.909 22:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.909 22:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:05.909 22:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:07.848 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:07.848 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:07.848 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.106 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.106 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.106 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:08.106 22:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:08.106 [global] 00:15:08.106 thread=1 00:15:08.106 invalidate=1 00:15:08.106 rw=write 00:15:08.106 time_based=1 00:15:08.106 runtime=1 00:15:08.106 ioengine=libaio 00:15:08.106 direct=1 00:15:08.106 bs=4096 00:15:08.106 iodepth=1 00:15:08.106 norandommap=0 00:15:08.106 numjobs=1 00:15:08.106 00:15:08.106 verify_dump=1 00:15:08.106 verify_backlog=512 00:15:08.106 verify_state_save=0 00:15:08.106 do_verify=1 00:15:08.106 verify=crc32c-intel 00:15:08.106 [job0] 00:15:08.106 filename=/dev/nvme0n1 00:15:08.106 Could not set queue depth (nvme0n1) 00:15:08.364 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:15:08.364 fio-3.35 00:15:08.364 Starting 1 thread 00:15:09.300 00:15:09.300 job0: (groupid=0, jobs=1): err= 0: pid=807631: Mon Jul 22 22:55:45 2024 00:15:09.300 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:15:09.300 slat (nsec): min=19481, max=34571, avg=33189.00, stdev=3156.12 00:15:09.300 clat (usec): min=40863, max=41986, avg=41093.87, stdev=357.50 00:15:09.300 lat (usec): min=40897, max=42020, avg=41127.06, stdev=357.83 00:15:09.300 clat percentiles (usec): 00:15:09.300 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:09.300 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:09.300 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:15:09.300 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:09.300 | 99.99th=[42206] 00:15:09.300 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:15:09.300 slat (nsec): min=9030, max=71521, avg=23496.90, stdev=9429.08 00:15:09.300 clat (usec): min=171, max=576, avg=258.92, stdev=53.02 00:15:09.300 lat (usec): min=181, max=617, avg=282.41, stdev=58.06 00:15:09.300 clat percentiles (usec): 00:15:09.300 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:15:09.300 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 249], 00:15:09.300 | 70.00th=[ 273], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 363], 00:15:09.300 | 99.00th=[ 383], 99.50th=[ 429], 99.90th=[ 578], 99.95th=[ 578], 00:15:09.300 | 99.99th=[ 578] 00:15:09.300 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:09.300 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:09.300 lat (usec) : 250=57.97%, 500=37.90%, 750=0.19% 00:15:09.300 lat (msec) : 50=3.94% 00:15:09.300 cpu : usr=0.59%, sys=1.19%, ctx=534, majf=0, minf=2 00:15:09.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.300 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.300 00:15:09.300 Run status group 0 (all jobs): 00:15:09.300 READ: bw=83.1KiB/s (85.1kB/s), 83.1KiB/s-83.1KiB/s (85.1kB/s-85.1kB/s), io=84.0KiB (86.0kB), run=1011-1011msec 00:15:09.300 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:15:09.300 00:15:09.300 Disk stats (read/write): 00:15:09.300 nvme0n1: ios=45/512, merge=0/0, ticks=1725/132, in_queue=1857, util=98.60% 00:15:09.300 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:09.559 
22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:09.559 rmmod nvme_tcp 00:15:09.559 rmmod nvme_fabrics 00:15:09.559 rmmod nvme_keyring 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 806943 ']' 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 806943 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 806943 ']' 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 806943 00:15:09.559 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 806943 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 806943' 00:15:09.819 killing process with pid 806943 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 806943 00:15:09.819 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 806943 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.079 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.617 00:15:12.617 real 0m12.531s 00:15:12.617 user 0m27.971s 00:15:12.617 sys 0m3.671s 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.617 ************************************ 00:15:12.617 END TEST nvmf_nmic 00:15:12.617 ************************************ 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:12.617 ************************************ 00:15:12.617 START TEST nvmf_fio_target 00:15:12.617 ************************************ 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:12.617 * Looking for test storage... 00:15:12.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:12.617 22:55:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:12.617 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.911 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:15.912 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:15.912 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:15.912 Found net devices under 0000:84:00.0: cvl_0_0 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:15.912 Found net devices under 0000:84:00.1: cvl_0_1 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.912 22:55:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:15:15.912 00:15:15.912 --- 10.0.0.2 ping statistics --- 00:15:15.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.912 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:15:15.912 00:15:15.912 --- 10.0.0.1 ping statistics --- 00:15:15.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.912 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:15:15.912 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=809920 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 809920 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 809920 ']' 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.913 22:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.913 [2024-07-22 22:55:51.975664] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:15:15.913 [2024-07-22 22:55:51.975842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.913 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.913 [2024-07-22 22:55:52.132405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.172 [2024-07-22 22:55:52.292823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.172 [2024-07-22 22:55:52.292887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.172 [2024-07-22 22:55:52.292907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.172 [2024-07-22 22:55:52.292924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.172 [2024-07-22 22:55:52.292946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
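Condensed, the phy/netns wiring and target launch that the trace above performs for this fio test look like the following. Device names, addresses, and the nvmf_tgt invocation are taken verbatim from the trace; this is a sketch of the harness steps, not the common.sh functions themselves:

    # move one e810 port into a namespace for the target; its peer stays in the
    # root namespace as the initiator side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check, as above

    # start the target inside the namespace; the harness backgrounds it and
    # waits for the RPC socket (waitforlisten in the trace)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &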
00:15:16.172 [2024-07-22 22:55:52.293014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.172 [2024-07-22 22:55:52.293077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.172 [2024-07-22 22:55:52.293109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.172 [2024-07-22 22:55:52.293112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.430 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:16.997 [2024-07-22 22:55:53.140987] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.997 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:17.563 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:17.563 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:18.497 22:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:18.497 22:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.064 22:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:19.064 22:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.631 22:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:19.631 22:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:20.198 22:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.764 22:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:20.765 22:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.332 22:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:21.332 22:55:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.267 22:55:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:22.267 22:55:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:22.526 22:55:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:23.092 22:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:23.092 22:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.351 22:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:23.351 22:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.609 22:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.868 [2024-07-22 22:56:00.106574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.868 22:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:24.125 22:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:25.059 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.628 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:25.628 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:25.628 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.628 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:25.628 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:25.628 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:27.558 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:27.558 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:27.558 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.558 22:56:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:27.558 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.558 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:27.558 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:27.558 [global] 00:15:27.558 thread=1 00:15:27.558 invalidate=1 00:15:27.558 rw=write 00:15:27.558 time_based=1 00:15:27.558 runtime=1 00:15:27.558 ioengine=libaio 00:15:27.558 direct=1 00:15:27.558 bs=4096 00:15:27.558 iodepth=1 00:15:27.558 norandommap=0 00:15:27.558 numjobs=1 00:15:27.558 00:15:27.558 verify_dump=1 00:15:27.558 verify_backlog=512 00:15:27.558 verify_state_save=0 00:15:27.558 do_verify=1 00:15:27.558 verify=crc32c-intel 00:15:27.558 [job0] 00:15:27.558 filename=/dev/nvme0n1 00:15:27.558 [job1] 00:15:27.558 filename=/dev/nvme0n2 00:15:27.558 [job2] 00:15:27.558 filename=/dev/nvme0n3 00:15:27.558 [job3] 00:15:27.558 filename=/dev/nvme0n4 00:15:27.558 Could not set queue depth (nvme0n1) 00:15:27.558 Could not set queue depth (nvme0n2) 00:15:27.558 Could not set queue depth (nvme0n3) 00:15:27.558 Could not set queue depth (nvme0n4) 00:15:27.816 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.816 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.816 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.816 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.816 fio-3.35 00:15:27.816 Starting 4 threads 00:15:29.190 00:15:29.190 job0: (groupid=0, jobs=1): err= 0: pid=811395: Mon Jul 22 22:56:05 2024 00:15:29.190 read: IOPS=1262, BW=5051KiB/s (5172kB/s)(5056KiB/1001msec) 00:15:29.190 slat (nsec): min=6397, max=43228, avg=17353.22, stdev=4591.07 00:15:29.190 clat (usec): min=257, max=1732, avg=420.61, stdev=88.86 00:15:29.190 lat (usec): min=265, max=1751, avg=437.96, stdev=90.55 00:15:29.190 clat percentiles (usec): 00:15:29.190 | 1.00th=[ 277], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 351], 00:15:29.190 | 30.00th=[ 367], 40.00th=[ 396], 50.00th=[ 424], 60.00th=[ 445], 00:15:29.190 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 523], 00:15:29.191 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[ 1614], 99.95th=[ 1729], 00:15:29.191 | 99.99th=[ 1729] 00:15:29.191 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:29.191 slat (nsec): min=8576, max=63894, avg=16218.01, stdev=6355.12 00:15:29.191 clat (usec): min=169, max=3861, avg=266.26, stdev=114.61 00:15:29.191 lat (usec): min=179, max=3886, avg=282.48, stdev=116.53 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:15:29.191 | 30.00th=[ 223], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 281], 00:15:29.191 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 359], 00:15:29.191 | 99.00th=[ 420], 99.50th=[ 449], 99.90th=[ 1696], 99.95th=[ 3851], 00:15:29.191 | 99.99th=[ 3851] 00:15:29.191 bw ( KiB/s): min= 7192, max= 7192, per=51.72%, avg=7192.00, stdev= 0.00, samples=1 00:15:29.191 iops : min= 1798, max= 1798, avg=1798.00, stdev= 0.00, 
samples=1 00:15:29.191 lat (usec) : 250=27.50%, 500=66.39%, 750=5.82%, 1000=0.11% 00:15:29.191 lat (msec) : 2=0.14%, 4=0.04% 00:15:29.191 cpu : usr=2.60%, sys=4.60%, ctx=2802, majf=0, minf=1 00:15:29.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 issued rwts: total=1264,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.191 job1: (groupid=0, jobs=1): err= 0: pid=811396: Mon Jul 22 22:56:05 2024 00:15:29.191 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:15:29.191 slat (nsec): min=10539, max=45494, avg=23496.86, stdev=9472.34 00:15:29.191 clat (usec): min=436, max=41203, avg=39107.02, stdev=8638.36 00:15:29.191 lat (usec): min=481, max=41236, avg=39130.51, stdev=8633.49 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 437], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:29.191 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.191 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:29.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:29.191 | 99.99th=[41157] 00:15:29.191 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:15:29.191 slat (usec): min=8, max=107, avg=16.02, stdev= 9.16 00:15:29.191 clat (usec): min=189, max=1348, avg=265.22, stdev=72.78 00:15:29.191 lat (usec): min=200, max=1379, avg=281.25, stdev=75.92 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:15:29.191 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 269], 00:15:29.191 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 351], 00:15:29.191 | 99.00th=[ 396], 99.50th=[ 453], 99.90th=[ 1352], 99.95th=[ 1352], 00:15:29.191 | 99.99th=[ 1352] 00:15:29.191 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.191 lat (usec) : 250=45.88%, 500=49.81%, 1000=0.19% 00:15:29.191 lat (msec) : 2=0.19%, 50=3.93% 00:15:29.191 cpu : usr=0.40%, sys=0.99%, ctx=534, majf=0, minf=2 00:15:29.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.191 job2: (groupid=0, jobs=1): err= 0: pid=811397: Mon Jul 22 22:56:05 2024 00:15:29.191 read: IOPS=603, BW=2413KiB/s (2471kB/s)(2488KiB/1031msec) 00:15:29.191 slat (nsec): min=8471, max=49344, avg=17840.18, stdev=6851.03 00:15:29.191 clat (usec): min=251, max=41110, avg=1127.15, stdev=5591.62 00:15:29.191 lat (usec): min=260, max=41128, avg=1144.99, stdev=5592.79 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:15:29.191 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:15:29.191 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 437], 00:15:29.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:29.191 | 
99.99th=[41157] 00:15:29.191 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:15:29.191 slat (nsec): min=7515, max=67415, avg=21227.33, stdev=8535.53 00:15:29.191 clat (usec): min=193, max=465, avg=281.23, stdev=45.25 00:15:29.191 lat (usec): min=205, max=510, avg=302.46, stdev=47.83 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 249], 00:15:29.191 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:15:29.191 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 351], 95.00th=[ 383], 00:15:29.191 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 461], 99.95th=[ 465], 00:15:29.191 | 99.99th=[ 465] 00:15:29.191 bw ( KiB/s): min= 8192, max= 8192, per=58.91%, avg=8192.00, stdev= 0.00, samples=1 00:15:29.191 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:29.191 lat (usec) : 250=14.70%, 500=83.90%, 750=0.67% 00:15:29.191 lat (msec) : 50=0.73% 00:15:29.191 cpu : usr=2.04%, sys=4.08%, ctx=1646, majf=0, minf=1 00:15:29.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 issued rwts: total=622,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.191 job3: (groupid=0, jobs=1): err= 0: pid=811398: Mon Jul 22 22:56:05 2024 00:15:29.191 read: IOPS=180, BW=721KiB/s (738kB/s)(740KiB/1027msec) 00:15:29.191 slat (nsec): min=8665, max=53691, avg=20811.43, stdev=8755.53 00:15:29.191 clat (usec): min=252, max=41335, avg=4736.81, stdev=12384.32 00:15:29.191 lat (usec): min=261, max=41346, avg=4757.62, stdev=12384.84 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 260], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 412], 00:15:29.191 | 30.00th=[ 437], 40.00th=[ 453], 50.00th=[ 490], 60.00th=[ 510], 00:15:29.191 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[40633], 95.00th=[41157], 00:15:29.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:29.191 | 99.99th=[41157] 00:15:29.191 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:15:29.191 slat (nsec): min=8157, max=65874, avg=16671.62, stdev=7040.89 00:15:29.191 clat (usec): min=182, max=723, avg=263.01, stdev=52.53 00:15:29.191 lat (usec): min=202, max=736, avg=279.68, stdev=55.55 00:15:29.191 clat percentiles (usec): 00:15:29.191 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 223], 00:15:29.191 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 258], 00:15:29.191 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 359], 00:15:29.191 | 99.00th=[ 396], 99.50th=[ 441], 99.90th=[ 725], 99.95th=[ 725], 00:15:29.191 | 99.99th=[ 725] 00:15:29.191 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.191 lat (usec) : 250=40.46%, 500=47.06%, 750=9.61% 00:15:29.191 lat (msec) : 50=2.87% 00:15:29.191 cpu : usr=0.58%, sys=1.36%, ctx=698, majf=0, minf=1 00:15:29.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.191 issued rwts: total=185,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:15:29.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.191 00:15:29.191 Run status group 0 (all jobs): 00:15:29.191 READ: bw=8120KiB/s (8315kB/s), 87.4KiB/s-5051KiB/s (89.5kB/s-5172kB/s), io=8372KiB (8573kB), run=1001-1031msec 00:15:29.191 WRITE: bw=13.6MiB/s (14.2MB/s), 1994KiB/s-6138KiB/s (2042kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1031msec 00:15:29.191 00:15:29.191 Disk stats (read/write): 00:15:29.191 nvme0n1: ios=1077/1056, merge=0/0, ticks=1348/294, in_queue=1642, util=96.09% 00:15:29.191 nvme0n2: ios=29/512, merge=0/0, ticks=628/128, in_queue=756, util=81.40% 00:15:29.191 nvme0n3: ios=616/1024, merge=0/0, ticks=444/284, in_queue=728, util=86.92% 00:15:29.191 nvme0n4: ios=200/512, merge=0/0, ticks=1528/125, in_queue=1653, util=96.40% 00:15:29.191 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:29.191 [global] 00:15:29.191 thread=1 00:15:29.191 invalidate=1 00:15:29.191 rw=randwrite 00:15:29.191 time_based=1 00:15:29.191 runtime=1 00:15:29.191 ioengine=libaio 00:15:29.191 direct=1 00:15:29.191 bs=4096 00:15:29.191 iodepth=1 00:15:29.191 norandommap=0 00:15:29.191 numjobs=1 00:15:29.191 00:15:29.191 verify_dump=1 00:15:29.191 verify_backlog=512 00:15:29.191 verify_state_save=0 00:15:29.191 do_verify=1 00:15:29.191 verify=crc32c-intel 00:15:29.191 [job0] 00:15:29.191 filename=/dev/nvme0n1 00:15:29.191 [job1] 00:15:29.191 filename=/dev/nvme0n2 00:15:29.191 [job2] 00:15:29.191 filename=/dev/nvme0n3 00:15:29.191 [job3] 00:15:29.191 filename=/dev/nvme0n4 00:15:29.191 Could not set queue depth (nvme0n1) 00:15:29.191 Could not set queue depth (nvme0n2) 00:15:29.191 Could not set queue depth (nvme0n3) 00:15:29.191 Could not set queue depth (nvme0n4) 00:15:29.450 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.450 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.450 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.450 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.450 fio-3.35 00:15:29.450 Starting 4 threads 00:15:30.823 00:15:30.823 job0: (groupid=0, jobs=1): err= 0: pid=811628: Mon Jul 22 22:56:06 2024 00:15:30.823 read: IOPS=35, BW=141KiB/s (145kB/s)(144KiB/1019msec) 00:15:30.823 slat (nsec): min=16432, max=59098, avg=29562.83, stdev=8068.45 00:15:30.823 clat (usec): min=463, max=41412, avg=23048.32, stdev=20434.28 00:15:30.823 lat (usec): min=495, max=41444, avg=23077.88, stdev=20436.41 00:15:30.823 clat percentiles (usec): 00:15:30.823 | 1.00th=[ 465], 5.00th=[ 474], 10.00th=[ 478], 20.00th=[ 523], 00:15:30.823 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[40633], 60.00th=[41157], 00:15:30.823 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:30.823 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:30.823 | 99.99th=[41157] 00:15:30.823 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:15:30.823 slat (nsec): min=10481, max=59453, avg=28021.02, stdev=6295.09 00:15:30.823 clat (usec): min=199, max=625, avg=331.84, stdev=64.16 00:15:30.823 lat (usec): min=210, max=654, avg=359.86, stdev=66.03 00:15:30.823 clat percentiles (usec): 00:15:30.823 | 1.00th=[ 233], 5.00th=[ 251], 
10.00th=[ 269], 20.00th=[ 285], 00:15:30.823 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:15:30.823 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[ 457], 00:15:30.824 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 627], 99.95th=[ 627], 00:15:30.824 | 99.99th=[ 627] 00:15:30.824 bw ( KiB/s): min= 4087, max= 4087, per=41.27%, avg=4087.00, stdev= 0.00, samples=1 00:15:30.824 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:15:30.824 lat (usec) : 250=3.83%, 500=88.87%, 750=3.65% 00:15:30.824 lat (msec) : 50=3.65% 00:15:30.824 cpu : usr=1.08%, sys=1.86%, ctx=548, majf=0, minf=1 00:15:30.824 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.824 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.824 job1: (groupid=0, jobs=1): err= 0: pid=811629: Mon Jul 22 22:56:06 2024 00:15:30.824 read: IOPS=503, BW=2015KiB/s (2064kB/s)(2084KiB/1034msec) 00:15:30.824 slat (nsec): min=7769, max=51202, avg=16287.74, stdev=5865.92 00:15:30.824 clat (usec): min=221, max=41177, avg=1331.41, stdev=6353.71 00:15:30.824 lat (usec): min=229, max=41186, avg=1347.70, stdev=6355.22 00:15:30.824 clat percentiles (usec): 00:15:30.824 | 1.00th=[ 233], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 281], 00:15:30.824 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:15:30.824 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 408], 95.00th=[ 453], 00:15:30.824 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:30.824 | 99.99th=[41157] 00:15:30.824 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:15:30.824 slat (nsec): min=9336, max=70611, avg=23751.20, stdev=9491.19 00:15:30.824 clat (usec): min=166, max=617, avg=291.31, stdev=77.97 00:15:30.824 lat (usec): min=177, max=656, avg=315.06, stdev=84.08 00:15:30.824 clat percentiles (usec): 00:15:30.824 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 217], 00:15:30.824 | 30.00th=[ 233], 40.00th=[ 253], 50.00th=[ 293], 60.00th=[ 314], 00:15:30.824 | 70.00th=[ 330], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 424], 00:15:30.824 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 619], 00:15:30.824 | 99.99th=[ 619] 00:15:30.824 bw ( KiB/s): min= 4087, max= 4096, per=41.31%, avg=4091.50, stdev= 6.36, samples=2 00:15:30.824 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:15:30.824 lat (usec) : 250=27.51%, 500=71.33%, 750=0.32% 00:15:30.824 lat (msec) : 50=0.84% 00:15:30.824 cpu : usr=2.42%, sys=3.97%, ctx=1545, majf=0, minf=2 00:15:30.824 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.824 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.824 job2: (groupid=0, jobs=1): err= 0: pid=811632: Mon Jul 22 22:56:06 2024 00:15:30.824 read: IOPS=261, BW=1045KiB/s (1070kB/s)(1056KiB/1011msec) 00:15:30.824 slat (nsec): min=9428, max=55396, avg=20400.31, stdev=6308.50 00:15:30.824 clat (usec): min=282, max=41930, avg=3138.23, stdev=10283.61 00:15:30.824 lat (usec): 
min=301, max=41965, avg=3158.63, stdev=10286.90 00:15:30.824 clat percentiles (usec): 00:15:30.824 | 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 338], 00:15:30.824 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:15:30.824 | 70.00th=[ 367], 80.00th=[ 396], 90.00th=[ 469], 95.00th=[41157], 00:15:30.824 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:30.824 | 99.99th=[41681] 00:15:30.824 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:15:30.824 slat (nsec): min=9263, max=50693, avg=22225.90, stdev=5149.00 00:15:30.824 clat (usec): min=235, max=518, avg=314.95, stdev=35.42 00:15:30.824 lat (usec): min=250, max=543, avg=337.18, stdev=36.17 00:15:30.824 clat percentiles (usec): 00:15:30.824 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:15:30.824 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:15:30.824 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 383], 00:15:30.824 | 99.00th=[ 416], 99.50th=[ 469], 99.90th=[ 519], 99.95th=[ 519], 00:15:30.824 | 99.99th=[ 519] 00:15:30.824 bw ( KiB/s): min= 4087, max= 4087, per=41.27%, avg=4087.00, stdev= 0.00, samples=1 00:15:30.824 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:15:30.824 lat (usec) : 250=0.90%, 500=96.13%, 750=0.52%, 1000=0.13% 00:15:30.824 lat (msec) : 50=2.32% 00:15:30.824 cpu : usr=0.89%, sys=1.58%, ctx=777, majf=0, minf=1 00:15:30.824 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 issued rwts: total=264,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.824 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.824 job3: (groupid=0, jobs=1): err= 0: pid=811633: Mon Jul 22 22:56:06 2024 00:15:30.824 read: IOPS=401, BW=1605KiB/s (1643kB/s)(1608KiB/1002msec) 00:15:30.824 slat (nsec): min=6974, max=77019, avg=25779.10, stdev=11691.03 00:15:30.824 clat (usec): min=249, max=42179, avg=2016.73, stdev=7983.79 00:15:30.824 lat (usec): min=257, max=42213, avg=2042.51, stdev=7985.10 00:15:30.824 clat percentiles (usec): 00:15:30.824 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322], 00:15:30.824 | 30.00th=[ 347], 40.00th=[ 367], 50.00th=[ 388], 60.00th=[ 412], 00:15:30.824 | 70.00th=[ 445], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[ 537], 00:15:30.824 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:15:30.824 | 99.99th=[42206] 00:15:30.824 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:30.824 slat (usec): min=8, max=111, avg=20.77, stdev= 6.86 00:15:30.824 clat (usec): min=247, max=626, avg=318.97, stdev=37.81 00:15:30.824 lat (usec): min=267, max=645, avg=339.74, stdev=38.88 00:15:30.824 clat percentiles (usec): 00:15:30.824 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:15:30.824 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:15:30.824 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 396], 00:15:30.824 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 627], 99.95th=[ 627], 00:15:30.824 | 99.99th=[ 627] 00:15:30.824 bw ( KiB/s): min= 4087, max= 4087, per=41.27%, avg=4087.00, stdev= 0.00, samples=1 00:15:30.824 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:15:30.824 lat (usec) : 250=0.22%, 500=94.09%, 750=3.94% 00:15:30.824 lat (msec) 
: 50=1.75% 00:15:30.824 cpu : usr=1.20%, sys=2.00%, ctx=914, majf=0, minf=1 00:15:30.824 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.824 issued rwts: total=402,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.824 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.824 00:15:30.824 Run status group 0 (all jobs): 00:15:30.824 READ: bw=4731KiB/s (4845kB/s), 141KiB/s-2015KiB/s (145kB/s-2064kB/s), io=4892KiB (5009kB), run=1002-1034msec 00:15:30.824 WRITE: bw=9903KiB/s (10.1MB/s), 2010KiB/s-3961KiB/s (2058kB/s-4056kB/s), io=10.0MiB (10.5MB), run=1002-1034msec 00:15:30.824 00:15:30.824 Disk stats (read/write): 00:15:30.824 nvme0n1: ios=69/512, merge=0/0, ticks=612/172, in_queue=784, util=80.14% 00:15:30.824 nvme0n2: ios=520/1024, merge=0/0, ticks=638/274, in_queue=912, util=84.83% 00:15:30.824 nvme0n3: ios=316/512, merge=0/0, ticks=775/160, in_queue=935, util=99.22% 00:15:30.824 nvme0n4: ios=396/512, merge=0/0, ticks=560/157, in_queue=717, util=88.72% 00:15:30.824 22:56:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:30.824 [global] 00:15:30.824 thread=1 00:15:30.824 invalidate=1 00:15:30.824 rw=write 00:15:30.824 time_based=1 00:15:30.824 runtime=1 00:15:30.824 ioengine=libaio 00:15:30.824 direct=1 00:15:30.824 bs=4096 00:15:30.824 iodepth=128 00:15:30.824 norandommap=0 00:15:30.824 numjobs=1 00:15:30.824 00:15:30.824 verify_dump=1 00:15:30.824 verify_backlog=512 00:15:30.824 verify_state_save=0 00:15:30.824 do_verify=1 00:15:30.824 verify=crc32c-intel 00:15:30.824 [job0] 00:15:30.824 filename=/dev/nvme0n1 00:15:30.824 [job1] 00:15:30.824 filename=/dev/nvme0n2 00:15:30.824 [job2] 00:15:30.824 filename=/dev/nvme0n3 00:15:30.824 [job3] 00:15:30.824 filename=/dev/nvme0n4 00:15:30.824 Could not set queue depth (nvme0n1) 00:15:30.824 Could not set queue depth (nvme0n2) 00:15:30.824 Could not set queue depth (nvme0n3) 00:15:30.824 Could not set queue depth (nvme0n4) 00:15:31.083 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.083 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.083 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.083 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.083 fio-3.35 00:15:31.083 Starting 4 threads 00:15:32.459 00:15:32.459 job0: (groupid=0, jobs=1): err= 0: pid=811866: Mon Jul 22 22:56:08 2024 00:15:32.459 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:15:32.459 slat (usec): min=8, max=5737, avg=117.55, stdev=616.64 00:15:32.459 clat (usec): min=9986, max=39861, avg=16170.31, stdev=2883.99 00:15:32.459 lat (usec): min=10009, max=39876, avg=16287.86, stdev=2929.63 00:15:32.459 clat percentiles (usec): 00:15:32.459 | 1.00th=[10814], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:15:32.459 | 30.00th=[13435], 40.00th=[15401], 50.00th=[16712], 60.00th=[17957], 00:15:32.459 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[20055], 00:15:32.459 | 99.00th=[20841], 99.50th=[22676], 99.90th=[23725], 99.95th=[24773], 00:15:32.459 | 
99.99th=[40109] 00:15:32.459 write: IOPS=3453, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1006msec); 0 zone resets 00:15:32.459 slat (usec): min=6, max=40735, avg=170.37, stdev=1473.13 00:15:32.459 clat (usec): min=1517, max=119292, avg=22000.26, stdev=17999.86 00:15:32.459 lat (msec): min=10, max=119, avg=22.17, stdev=18.14 00:15:32.459 clat percentiles (msec): 00:15:32.459 | 1.00th=[ 13], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:15:32.459 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:15:32.459 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 42], 95.00th=[ 61], 00:15:32.459 | 99.00th=[ 102], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 117], 00:15:32.459 | 99.99th=[ 120] 00:15:32.459 bw ( KiB/s): min=10384, max=16384, per=28.77%, avg=13384.00, stdev=4242.64, samples=2 00:15:32.459 iops : min= 2596, max= 4096, avg=3346.00, stdev=1060.66, samples=2 00:15:32.459 lat (msec) : 2=0.02%, 10=0.05%, 20=90.02%, 50=5.71%, 100=3.53% 00:15:32.459 lat (msec) : 250=0.67% 00:15:32.459 cpu : usr=6.37%, sys=10.15%, ctx=260, majf=0, minf=1 00:15:32.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:32.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.459 issued rwts: total=3072,3474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.459 job1: (groupid=0, jobs=1): err= 0: pid=811877: Mon Jul 22 22:56:08 2024 00:15:32.459 read: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(15.0MiB/1047msec) 00:15:32.459 slat (usec): min=3, max=13677, avg=123.44, stdev=734.48 00:15:32.459 clat (usec): min=3818, max=64877, avg=18025.69, stdev=8644.74 00:15:32.459 lat (usec): min=3829, max=71385, avg=18149.12, stdev=8661.72 00:15:32.459 clat percentiles (usec): 00:15:32.459 | 1.00th=[ 7308], 5.00th=[12256], 10.00th=[12911], 20.00th=[14222], 00:15:32.459 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15533], 60.00th=[16188], 00:15:32.459 | 70.00th=[17695], 80.00th=[20055], 90.00th=[23200], 95.00th=[28181], 00:15:32.459 | 99.00th=[64226], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:15:32.459 | 99.99th=[64750] 00:15:32.459 write: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1047msec); 0 zone resets 00:15:32.459 slat (usec): min=4, max=25763, avg=115.34, stdev=775.52 00:15:32.459 clat (usec): min=626, max=62142, avg=15186.26, stdev=6802.76 00:15:32.459 lat (usec): min=646, max=62155, avg=15301.61, stdev=6835.00 00:15:32.459 clat percentiles (usec): 00:15:32.459 | 1.00th=[ 2311], 5.00th=[ 7046], 10.00th=[10945], 20.00th=[12256], 00:15:32.459 | 30.00th=[12911], 40.00th=[14353], 50.00th=[14746], 60.00th=[15401], 00:15:32.459 | 70.00th=[16057], 80.00th=[16909], 90.00th=[18220], 95.00th=[26346], 00:15:32.459 | 99.00th=[51119], 99.50th=[56361], 99.90th=[62129], 99.95th=[62129], 00:15:32.459 | 99.99th=[62129] 00:15:32.459 bw ( KiB/s): min=15968, max=16800, per=35.22%, avg=16384.00, stdev=588.31, samples=2 00:15:32.459 iops : min= 3992, max= 4200, avg=4096.00, stdev=147.08, samples=2 00:15:32.459 lat (usec) : 750=0.10%, 1000=0.09% 00:15:32.459 lat (msec) : 2=0.13%, 4=1.50%, 10=3.95%, 20=81.65%, 50=10.42% 00:15:32.459 lat (msec) : 100=2.17% 00:15:32.459 cpu : usr=4.97%, sys=9.27%, ctx=420, majf=0, minf=1 00:15:32.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:32.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.459 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.459 job2: (groupid=0, jobs=1): err= 0: pid=811909: Mon Jul 22 22:56:08 2024 00:15:32.460 read: IOPS=1901, BW=7607KiB/s (7789kB/s)(7660KiB/1007msec) 00:15:32.460 slat (usec): min=5, max=23696, avg=212.71, stdev=1351.42 00:15:32.460 clat (usec): min=4480, max=63250, avg=28498.20, stdev=9547.21 00:15:32.460 lat (usec): min=12896, max=63265, avg=28710.92, stdev=9660.66 00:15:32.460 clat percentiles (usec): 00:15:32.460 | 1.00th=[13173], 5.00th=[17695], 10.00th=[18482], 20.00th=[20579], 00:15:32.460 | 30.00th=[22676], 40.00th=[25297], 50.00th=[27395], 60.00th=[27395], 00:15:32.460 | 70.00th=[30016], 80.00th=[35390], 90.00th=[46924], 95.00th=[47973], 00:15:32.460 | 99.00th=[56361], 99.50th=[56361], 99.90th=[59507], 99.95th=[63177], 00:15:32.460 | 99.99th=[63177] 00:15:32.460 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:15:32.460 slat (usec): min=4, max=34376, avg=268.99, stdev=1881.56 00:15:32.460 clat (usec): min=1048, max=115647, avg=35684.24, stdev=21338.79 00:15:32.460 lat (usec): min=1063, max=115695, avg=35953.23, stdev=21540.05 00:15:32.460 clat percentiles (msec): 00:15:32.460 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 19], 00:15:32.460 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 33], 00:15:32.460 | 70.00th=[ 42], 80.00th=[ 56], 90.00th=[ 71], 95.00th=[ 77], 00:15:32.460 | 99.00th=[ 94], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 107], 00:15:32.460 | 99.99th=[ 116] 00:15:32.460 bw ( KiB/s): min= 8192, max= 8192, per=17.61%, avg=8192.00, stdev= 0.00, samples=2 00:15:32.460 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:32.460 lat (msec) : 2=0.08%, 10=0.03%, 20=21.80%, 50=64.29%, 100=13.73% 00:15:32.460 lat (msec) : 250=0.08% 00:15:32.460 cpu : usr=3.58%, sys=5.86%, ctx=177, majf=0, minf=1 00:15:32.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:15:32.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.460 issued rwts: total=1915,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.460 job3: (groupid=0, jobs=1): err= 0: pid=811922: Mon Jul 22 22:56:08 2024 00:15:32.460 read: IOPS=2121, BW=8485KiB/s (8689kB/s)(8536KiB/1006msec) 00:15:32.460 slat (usec): min=3, max=37120, avg=238.10, stdev=1887.52 00:15:32.460 clat (msec): min=5, max=104, avg=27.79, stdev=17.94 00:15:32.460 lat (msec): min=5, max=104, avg=28.03, stdev=18.12 00:15:32.460 clat percentiles (msec): 00:15:32.460 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 16], 00:15:32.460 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 20], 60.00th=[ 21], 00:15:32.460 | 70.00th=[ 26], 80.00th=[ 36], 90.00th=[ 63], 95.00th=[ 69], 00:15:32.460 | 99.00th=[ 74], 99.50th=[ 74], 99.90th=[ 93], 99.95th=[ 94], 00:15:32.460 | 99.99th=[ 105] 00:15:32.460 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:15:32.460 slat (usec): min=5, max=12750, avg=180.21, stdev=848.20 00:15:32.460 clat (usec): min=5900, max=74410, avg=26566.89, stdev=12147.88 00:15:32.460 lat (usec): min=5921, max=74445, avg=26747.10, stdev=12208.83 00:15:32.460 clat percentiles (usec): 00:15:32.460 | 1.00th=[ 8848], 5.00th=[14091], 10.00th=[15401], 20.00th=[16319], 
00:15:32.460 | 30.00th=[16581], 40.00th=[20579], 50.00th=[22938], 60.00th=[28443], 00:15:32.460 | 70.00th=[32637], 80.00th=[36439], 90.00th=[38536], 95.00th=[46400], 00:15:32.460 | 99.00th=[71828], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:15:32.460 | 99.99th=[73925] 00:15:32.460 bw ( KiB/s): min= 8744, max=11408, per=21.66%, avg=10076.00, stdev=1883.73, samples=2 00:15:32.460 iops : min= 2186, max= 2852, avg=2519.00, stdev=470.93, samples=2 00:15:32.460 lat (msec) : 10=2.85%, 20=42.22%, 50=44.48%, 100=10.42%, 250=0.02% 00:15:32.460 cpu : usr=2.59%, sys=6.17%, ctx=261, majf=0, minf=1 00:15:32.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:32.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.460 issued rwts: total=2134,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.460 00:15:32.460 Run status group 0 (all jobs): 00:15:32.460 READ: bw=40.9MiB/s (42.9MB/s), 7607KiB/s-14.3MiB/s (7789kB/s-15.0MB/s), io=42.8MiB (44.9MB), run=1006-1047msec 00:15:32.460 WRITE: bw=45.4MiB/s (47.6MB/s), 8135KiB/s-15.3MiB/s (8330kB/s-16.0MB/s), io=47.6MiB (49.9MB), run=1006-1047msec 00:15:32.460 00:15:32.460 Disk stats (read/write): 00:15:32.460 nvme0n1: ios=3113/3137, merge=0/0, ticks=16794/18651, in_queue=35445, util=98.60% 00:15:32.460 nvme0n2: ios=3123/3327, merge=0/0, ticks=23305/23307, in_queue=46612, util=96.72% 00:15:32.460 nvme0n3: ios=1393/1536, merge=0/0, ticks=18282/25939, in_queue=44221, util=98.61% 00:15:32.460 nvme0n4: ios=1594/2048, merge=0/0, ticks=23643/26164, in_queue=49807, util=99.57% 00:15:32.460 22:56:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:32.460 [global] 00:15:32.460 thread=1 00:15:32.460 invalidate=1 00:15:32.460 rw=randwrite 00:15:32.460 time_based=1 00:15:32.460 runtime=1 00:15:32.460 ioengine=libaio 00:15:32.460 direct=1 00:15:32.460 bs=4096 00:15:32.460 iodepth=128 00:15:32.460 norandommap=0 00:15:32.460 numjobs=1 00:15:32.460 00:15:32.460 verify_dump=1 00:15:32.460 verify_backlog=512 00:15:32.460 verify_state_save=0 00:15:32.460 do_verify=1 00:15:32.460 verify=crc32c-intel 00:15:32.460 [job0] 00:15:32.460 filename=/dev/nvme0n1 00:15:32.460 [job1] 00:15:32.460 filename=/dev/nvme0n2 00:15:32.460 [job2] 00:15:32.460 filename=/dev/nvme0n3 00:15:32.460 [job3] 00:15:32.460 filename=/dev/nvme0n4 00:15:32.460 Could not set queue depth (nvme0n1) 00:15:32.460 Could not set queue depth (nvme0n2) 00:15:32.460 Could not set queue depth (nvme0n3) 00:15:32.460 Could not set queue depth (nvme0n4) 00:15:32.718 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.718 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.718 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.718 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.718 fio-3.35 00:15:32.718 Starting 4 threads 00:15:34.094 00:15:34.094 job0: (groupid=0, jobs=1): err= 0: pid=812212: Mon Jul 22 22:56:10 2024 00:15:34.094 read: IOPS=3273, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1003msec) 00:15:34.094 slat 
(usec): min=3, max=16217, avg=142.15, stdev=837.04 00:15:34.094 clat (usec): min=981, max=43111, avg=17844.17, stdev=5850.35 00:15:34.094 lat (usec): min=4812, max=43126, avg=17986.32, stdev=5885.74 00:15:34.094 clat percentiles (usec): 00:15:34.094 | 1.00th=[ 5473], 5.00th=[10028], 10.00th=[12387], 20.00th=[13960], 00:15:34.094 | 30.00th=[14484], 40.00th=[15401], 50.00th=[17171], 60.00th=[18220], 00:15:34.094 | 70.00th=[18744], 80.00th=[20055], 90.00th=[27919], 95.00th=[30016], 00:15:34.094 | 99.00th=[33424], 99.50th=[34341], 99.90th=[43254], 99.95th=[43254], 00:15:34.094 | 99.99th=[43254] 00:15:34.094 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:15:34.094 slat (usec): min=5, max=18083, avg=137.03, stdev=864.42 00:15:34.094 clat (usec): min=8212, max=67945, avg=18998.45, stdev=9311.57 00:15:34.094 lat (usec): min=8223, max=67988, avg=19135.47, stdev=9392.62 00:15:34.094 clat percentiles (usec): 00:15:34.094 | 1.00th=[10945], 5.00th=[11731], 10.00th=[13566], 20.00th=[14091], 00:15:34.094 | 30.00th=[14484], 40.00th=[14746], 50.00th=[16188], 60.00th=[17433], 00:15:34.094 | 70.00th=[17957], 80.00th=[19268], 90.00th=[29492], 95.00th=[45876], 00:15:34.094 | 99.00th=[52167], 99.50th=[58459], 99.90th=[60556], 99.95th=[64750], 00:15:34.094 | 99.99th=[67634] 00:15:34.094 bw ( KiB/s): min=12368, max=16304, per=27.55%, avg=14336.00, stdev=2783.17, samples=2 00:15:34.094 iops : min= 3092, max= 4076, avg=3584.00, stdev=695.79, samples=2 00:15:34.094 lat (usec) : 1000=0.01% 00:15:34.094 lat (msec) : 10=2.29%, 20=79.35%, 50=16.66%, 100=1.69% 00:15:34.094 cpu : usr=4.99%, sys=8.58%, ctx=311, majf=0, minf=11 00:15:34.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:34.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.094 issued rwts: total=3283,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.094 job1: (groupid=0, jobs=1): err= 0: pid=812214: Mon Jul 22 22:56:10 2024 00:15:34.094 read: IOPS=3187, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1012msec) 00:15:34.094 slat (usec): min=2, max=44619, avg=158.03, stdev=1206.69 00:15:34.094 clat (usec): min=513, max=58203, avg=19738.80, stdev=8589.60 00:15:34.094 lat (usec): min=8575, max=58208, avg=19896.83, stdev=8641.81 00:15:34.094 clat percentiles (usec): 00:15:34.094 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[13304], 20.00th=[14484], 00:15:34.094 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15926], 60.00th=[17433], 00:15:34.094 | 70.00th=[19268], 80.00th=[25822], 90.00th=[31851], 95.00th=[35390], 00:15:34.094 | 99.00th=[53740], 99.50th=[57934], 99.90th=[57934], 99.95th=[58459], 00:15:34.094 | 99.99th=[58459] 00:15:34.094 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:15:34.094 slat (usec): min=3, max=22055, avg=128.31, stdev=918.66 00:15:34.094 clat (usec): min=3888, max=79279, avg=17933.45, stdev=8076.62 00:15:34.094 lat (usec): min=3899, max=79289, avg=18061.77, stdev=8141.08 00:15:34.094 clat percentiles (usec): 00:15:34.094 | 1.00th=[ 7570], 5.00th=[10552], 10.00th=[13042], 20.00th=[13960], 00:15:34.094 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:15:34.094 | 70.00th=[16188], 80.00th=[18744], 90.00th=[28181], 95.00th=[33817], 00:15:34.094 | 99.00th=[48497], 99.50th=[58459], 99.90th=[70779], 99.95th=[70779], 00:15:34.094 | 99.99th=[79168] 00:15:34.094 bw 
( KiB/s): min=12040, max=16416, per=27.34%, avg=14228.00, stdev=3094.30, samples=2 00:15:34.094 iops : min= 3010, max= 4104, avg=3557.00, stdev=773.57, samples=2 00:15:34.094 lat (usec) : 750=0.01% 00:15:34.094 lat (msec) : 4=0.10%, 10=2.98%, 20=74.58%, 50=21.28%, 100=1.04% 00:15:34.095 cpu : usr=5.93%, sys=6.53%, ctx=264, majf=0, minf=9 00:15:34.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:34.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.095 issued rwts: total=3226,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.095 job2: (groupid=0, jobs=1): err= 0: pid=812215: Mon Jul 22 22:56:10 2024 00:15:34.095 read: IOPS=2776, BW=10.8MiB/s (11.4MB/s)(11.0MiB/1010msec) 00:15:34.095 slat (usec): min=3, max=17309, avg=158.11, stdev=1049.63 00:15:34.095 clat (usec): min=4463, max=64030, avg=20757.94, stdev=8576.50 00:15:34.095 lat (usec): min=8451, max=64044, avg=20916.04, stdev=8628.61 00:15:34.095 clat percentiles (usec): 00:15:34.095 | 1.00th=[10552], 5.00th=[14484], 10.00th=[15270], 20.00th=[15795], 00:15:34.095 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:15:34.095 | 70.00th=[20055], 80.00th=[22414], 90.00th=[29492], 95.00th=[43779], 00:15:34.095 | 99.00th=[56361], 99.50th=[56361], 99.90th=[64226], 99.95th=[64226], 00:15:34.095 | 99.99th=[64226] 00:15:34.095 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:15:34.095 slat (usec): min=4, max=18751, avg=168.91, stdev=1160.21 00:15:34.095 clat (usec): min=1120, max=64124, avg=22719.61, stdev=11871.20 00:15:34.095 lat (usec): min=1156, max=64160, avg=22888.52, stdev=11964.06 00:15:34.095 clat percentiles (usec): 00:15:34.095 | 1.00th=[ 3064], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[13829], 00:15:34.095 | 30.00th=[16581], 40.00th=[17171], 50.00th=[18482], 60.00th=[20055], 00:15:34.095 | 70.00th=[23987], 80.00th=[33162], 90.00th=[44827], 95.00th=[47973], 00:15:34.095 | 99.00th=[51643], 99.50th=[52167], 99.90th=[62129], 99.95th=[63177], 00:15:34.095 | 99.99th=[64226] 00:15:34.095 bw ( KiB/s): min=12168, max=12408, per=23.61%, avg=12288.00, stdev=169.71, samples=2 00:15:34.095 iops : min= 3042, max= 3102, avg=3072.00, stdev=42.43, samples=2 00:15:34.095 lat (msec) : 2=0.29%, 4=0.31%, 10=3.80%, 20=60.72%, 50=32.79% 00:15:34.095 lat (msec) : 100=2.09% 00:15:34.095 cpu : usr=3.77%, sys=6.05%, ctx=259, majf=0, minf=13 00:15:34.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:34.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.095 issued rwts: total=2804,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.095 job3: (groupid=0, jobs=1): err= 0: pid=812216: Mon Jul 22 22:56:10 2024 00:15:34.095 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:15:34.095 slat (usec): min=4, max=33397, avg=177.62, stdev=1224.45 00:15:34.095 clat (usec): min=8357, max=94301, avg=22915.04, stdev=12558.42 00:15:34.095 lat (msec): min=8, max=105, avg=23.09, stdev=12.66 00:15:34.095 clat percentiles (usec): 00:15:34.095 | 1.00th=[11076], 5.00th=[12911], 10.00th=[14484], 20.00th=[15533], 00:15:34.095 | 30.00th=[15664], 40.00th=[17433], 50.00th=[17695], 
60.00th=[18744], 00:15:34.095 | 70.00th=[24249], 80.00th=[28181], 90.00th=[42206], 95.00th=[45351], 00:15:34.095 | 99.00th=[80217], 99.50th=[85459], 99.90th=[93848], 99.95th=[93848], 00:15:34.095 | 99.99th=[93848] 00:15:34.095 write: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1007msec); 0 zone resets 00:15:34.095 slat (usec): min=3, max=32432, avg=176.36, stdev=1090.37 00:15:34.095 clat (usec): min=6464, max=99679, avg=23044.31, stdev=16965.51 00:15:34.095 lat (usec): min=7380, max=99690, avg=23220.66, stdev=17075.78 00:15:34.095 clat percentiles (msec): 00:15:34.095 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 13], 20.00th=[ 15], 00:15:34.095 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 19], 00:15:34.095 | 70.00th=[ 20], 80.00th=[ 25], 90.00th=[ 44], 95.00th=[ 64], 00:15:34.095 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 101], 00:15:34.095 | 99.99th=[ 101] 00:15:34.095 bw ( KiB/s): min=10696, max=11704, per=21.52%, avg=11200.00, stdev=712.76, samples=2 00:15:34.095 iops : min= 2674, max= 2926, avg=2800.00, stdev=178.19, samples=2 00:15:34.095 lat (msec) : 10=3.94%, 20=66.98%, 50=22.85%, 100=6.23% 00:15:34.095 cpu : usr=3.48%, sys=5.47%, ctx=307, majf=0, minf=19 00:15:34.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:34.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.095 issued rwts: total=2560,2927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.095 00:15:34.095 Run status group 0 (all jobs): 00:15:34.095 READ: bw=45.8MiB/s (48.1MB/s), 9.93MiB/s-12.8MiB/s (10.4MB/s-13.4MB/s), io=46.4MiB (48.6MB), run=1003-1012msec 00:15:34.095 WRITE: bw=50.8MiB/s (53.3MB/s), 11.4MiB/s-14.0MiB/s (11.9MB/s-14.6MB/s), io=51.4MiB (53.9MB), run=1003-1012msec 00:15:34.095 00:15:34.095 Disk stats (read/write): 00:15:34.095 nvme0n1: ios=2893/3072, merge=0/0, ticks=16376/16232, in_queue=32608, util=86.77% 00:15:34.095 nvme0n2: ios=2767/3072, merge=0/0, ticks=25582/26322, in_queue=51904, util=86.88% 00:15:34.095 nvme0n3: ios=2209/2560, merge=0/0, ticks=20928/32390, in_queue=53318, util=97.38% 00:15:34.095 nvme0n4: ios=2105/2224, merge=0/0, ticks=22435/28699, in_queue=51134, util=97.47% 00:15:34.095 22:56:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:34.095 22:56:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=812353 00:15:34.095 22:56:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:34.095 22:56:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:34.095 [global] 00:15:34.095 thread=1 00:15:34.095 invalidate=1 00:15:34.095 rw=read 00:15:34.095 time_based=1 00:15:34.095 runtime=10 00:15:34.095 ioengine=libaio 00:15:34.095 direct=1 00:15:34.095 bs=4096 00:15:34.095 iodepth=1 00:15:34.095 norandommap=1 00:15:34.095 numjobs=1 00:15:34.095 00:15:34.095 [job0] 00:15:34.095 filename=/dev/nvme0n1 00:15:34.095 [job1] 00:15:34.095 filename=/dev/nvme0n2 00:15:34.095 [job2] 00:15:34.095 filename=/dev/nvme0n3 00:15:34.095 [job3] 00:15:34.095 filename=/dev/nvme0n4 00:15:34.095 Could not set queue depth (nvme0n1) 00:15:34.095 Could not set queue depth (nvme0n2) 00:15:34.095 Could not set queue depth (nvme0n3) 00:15:34.095 Could not set queue depth (nvme0n4) 
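The job file dumped by fio-wrapper just above can also be run standalone against the same connected namespaces. A minimal sketch, assuming fio is on PATH and using an illustrative file name (hotplug-read.fio is not part of the test output):

cat > hotplug-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1
[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio hotplug-read.fio   # equivalent of the 10-second read pass the wrapper launches above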
00:15:34.095 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.095 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.095 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.095 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.095 fio-3.35 00:15:34.095 Starting 4 threads 00:15:37.380 22:56:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:37.380 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=19685376, buflen=4096 00:15:37.380 fio: pid=812446, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:37.380 22:56:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:37.946 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.946 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:37.946 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=25837568, buflen=4096 00:15:37.946 fio: pid=812444, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:38.512 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:38.512 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:38.512 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=27738112, buflen=4096 00:15:38.512 fio: pid=812442, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:39.079 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=17719296, buflen=4096 00:15:39.079 fio: pid=812443, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:39.079 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.079 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:39.079 00:15:39.079 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=812442: Mon Jul 22 22:56:15 2024 00:15:39.079 read: IOPS=1653, BW=6613KiB/s (6772kB/s)(26.5MiB/4096msec) 00:15:39.079 slat (usec): min=5, max=12878, avg=22.62, stdev=182.71 00:15:39.079 clat (usec): min=230, max=41941, avg=574.06, stdev=2880.65 00:15:39.079 lat (usec): min=236, max=41956, avg=596.68, stdev=2886.96 00:15:39.079 clat percentiles (usec): 00:15:39.079 | 1.00th=[ 253], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 306], 00:15:39.079 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 375], 00:15:39.079 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 486], 95.00th=[ 506], 00:15:39.079 | 99.00th=[ 701], 99.50th=[40109], 99.90th=[41157], 99.95th=[41681], 00:15:39.079 | 99.99th=[41681] 00:15:39.079 bw ( KiB/s): 
min= 107, max= 9464, per=35.71%, avg=6769.38, stdev=4105.26, samples=8 00:15:39.079 iops : min= 26, max= 2366, avg=1692.25, stdev=1026.49, samples=8 00:15:39.079 lat (usec) : 250=0.68%, 500=93.15%, 750=5.42%, 1000=0.19% 00:15:39.079 lat (msec) : 2=0.03%, 10=0.01%, 50=0.50% 00:15:39.079 cpu : usr=1.32%, sys=3.98%, ctx=6775, majf=0, minf=1 00:15:39.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.079 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.079 issued rwts: total=6773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.079 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=812443: Mon Jul 22 22:56:15 2024 00:15:39.079 read: IOPS=923, BW=3692KiB/s (3781kB/s)(16.9MiB/4687msec) 00:15:39.079 slat (usec): min=6, max=10776, avg=22.45, stdev=214.61 00:15:39.079 clat (usec): min=224, max=61565, avg=1056.85, stdev=5374.47 00:15:39.079 lat (usec): min=232, max=61600, avg=1077.68, stdev=5407.79 00:15:39.079 clat percentiles (usec): 00:15:39.079 | 1.00th=[ 255], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 306], 00:15:39.079 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:15:39.079 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 445], 95.00th=[ 506], 00:15:39.079 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:15:39.079 | 99.99th=[61604] 00:15:39.079 bw ( KiB/s): min= 92, max=10992, per=20.24%, avg=3836.00, stdev=4378.48, samples=9 00:15:39.079 iops : min= 23, max= 2748, avg=959.00, stdev=1094.62, samples=9 00:15:39.079 lat (usec) : 250=0.62%, 500=93.90%, 750=3.67% 00:15:39.079 lat (msec) : 4=0.02%, 10=0.02%, 50=1.69%, 100=0.05% 00:15:39.079 cpu : usr=0.66%, sys=2.01%, ctx=4332, majf=0, minf=1 00:15:39.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.079 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.079 issued rwts: total=4327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.079 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=812444: Mon Jul 22 22:56:15 2024 00:15:39.079 read: IOPS=1757, BW=7030KiB/s (7199kB/s)(24.6MiB/3589msec) 00:15:39.079 slat (usec): min=5, max=14820, avg=21.46, stdev=186.63 00:15:39.079 clat (usec): min=253, max=41495, avg=538.45, stdev=2342.02 00:15:39.079 lat (usec): min=261, max=41527, avg=559.91, stdev=2350.16 00:15:39.079 clat percentiles (usec): 00:15:39.080 | 1.00th=[ 281], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 343], 00:15:39.080 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 416], 00:15:39.080 | 70.00th=[ 441], 80.00th=[ 469], 90.00th=[ 502], 95.00th=[ 529], 00:15:39.080 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:15:39.080 | 99.99th=[41681] 00:15:39.080 bw ( KiB/s): min= 315, max= 9176, per=38.01%, avg=7206.14, stdev=3219.55, samples=7 00:15:39.080 iops : min= 78, max= 2294, avg=1801.43, stdev=805.15, samples=7 00:15:39.080 lat (usec) : 500=89.48%, 750=10.13%, 1000=0.03% 00:15:39.080 lat (msec) : 2=0.02%, 50=0.33% 00:15:39.080 cpu : usr=1.59%, sys=3.76%, ctx=6310, majf=0, minf=1 00:15:39.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:15:39.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.080 issued rwts: total=6309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.080 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=812446: Mon Jul 22 22:56:15 2024 00:15:39.080 read: IOPS=1634, BW=6537KiB/s (6693kB/s)(18.8MiB/2941msec) 00:15:39.080 slat (nsec): min=5463, max=67411, avg=17073.41, stdev=7524.59 00:15:39.080 clat (usec): min=228, max=41503, avg=588.67, stdev=3105.08 00:15:39.080 lat (usec): min=234, max=41523, avg=605.74, stdev=3105.66 00:15:39.080 clat percentiles (usec): 00:15:39.080 | 1.00th=[ 255], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:15:39.080 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 343], 00:15:39.080 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 437], 95.00th=[ 465], 00:15:39.080 | 99.00th=[ 619], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:15:39.080 | 99.99th=[41681] 00:15:39.080 bw ( KiB/s): min= 672, max=11384, per=29.82%, avg=5652.80, stdev=4345.24, samples=5 00:15:39.080 iops : min= 168, max= 2846, avg=1413.20, stdev=1086.31, samples=5 00:15:39.080 lat (usec) : 250=0.40%, 500=96.59%, 750=2.21%, 1000=0.10% 00:15:39.080 lat (msec) : 2=0.04%, 4=0.04%, 20=0.02%, 50=0.58% 00:15:39.080 cpu : usr=1.60%, sys=3.06%, ctx=4809, majf=0, minf=1 00:15:39.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.080 issued rwts: total=4807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.080 00:15:39.080 Run status group 0 (all jobs): 00:15:39.080 READ: bw=18.5MiB/s (19.4MB/s), 3692KiB/s-7030KiB/s (3781kB/s-7199kB/s), io=86.8MiB (91.0MB), run=2941-4687msec 00:15:39.080 00:15:39.080 Disk stats (read/write): 00:15:39.080 nvme0n1: ios=6770/0, merge=0/0, ticks=3710/0, in_queue=3710, util=94.41% 00:15:39.080 nvme0n2: ios=4360/0, merge=0/0, ticks=5248/0, in_queue=5248, util=99.58% 00:15:39.080 nvme0n3: ios=6307/0, merge=0/0, ticks=3254/0, in_queue=3254, util=96.46% 00:15:39.080 nvme0n4: ios=4401/0, merge=0/0, ticks=3917/0, in_queue=3917, util=99.76% 00:15:39.647 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.647 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:40.214 22:56:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:40.214 22:56:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:40.781 22:56:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:40.781 22:56:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:41.040 
22:56:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:41.040 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:41.610 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:41.610 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 812353 00:15:41.610 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:41.610 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:41.872 nvmf hotplug test: fio failed as expected 00:15:41.872 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.130 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.130 rmmod nvme_tcp 00:15:42.389 rmmod nvme_fabrics 00:15:42.389 rmmod nvme_keyring 00:15:42.389 
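For readers tracing the hotplug teardown recorded above: fio.sh deletes the RAID and malloc bdevs out from under the still-running fio job (each delete surfaces as a Remote I/O error in the per-job output), waits for fio to exit, then disconnects the initiator and removes the subsystem. A condensed sketch of the equivalent manual sequence, using the rpc.py path and NQN shown in the log ($RPC is shorthand introduced here, not test output):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $RPC bdev_malloc_delete "$b"   # pulls the namespace's backing device while I/O is in flight
done
# after fio has exited ("fio failed as expected"):
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1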
22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 809920 ']' 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 809920 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 809920 ']' 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 809920 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:42.389 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.390 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809920 00:15:42.390 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:42.390 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:42.390 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809920' 00:15:42.390 killing process with pid 809920 00:15:42.390 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 809920 00:15:42.390 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 809920 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.650 22:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.190 00:15:45.190 real 0m32.488s 00:15:45.190 user 1m59.700s 00:15:45.190 sys 0m9.639s 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.190 ************************************ 00:15:45.190 END TEST nvmf_fio_target 00:15:45.190 ************************************ 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:45.190 ************************************ 00:15:45.190 START TEST nvmf_bdevio 00:15:45.190 ************************************ 00:15:45.190 22:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:45.190 * Looking for test storage... 00:15:45.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.190 22:56:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:47.784 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:47.784 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.784 
22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:47.784 Found net devices under 0000:84:00.0: cvl_0_0 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:47.784 Found net devices under 0000:84:00.1: cvl_0_1 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.784 22:56:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.784 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:15:48.043 00:15:48.043 --- 10.0.0.2 ping statistics --- 00:15:48.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.043 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:15:48.043 00:15:48.043 --- 10.0.0.1 ping statistics --- 00:15:48.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.043 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.043 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=815492 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 815492 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 815492 ']' 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.044 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.044 [2024-07-22 22:56:24.343379] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:15:48.044 [2024-07-22 22:56:24.343491] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.303 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.303 [2024-07-22 22:56:24.471280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.564 [2024-07-22 22:56:24.641409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.564 [2024-07-22 22:56:24.641500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.564 [2024-07-22 22:56:24.641537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.564 [2024-07-22 22:56:24.641567] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.564 [2024-07-22 22:56:24.641594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.564 [2024-07-22 22:56:24.641710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:48.564 [2024-07-22 22:56:24.641789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:48.564 [2024-07-22 22:56:24.641876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:48.564 [2024-07-22 22:56:24.641882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.564 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.564 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:48.564 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.564 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.564 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.824 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.825 [2024-07-22 22:56:24.905146] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.825 Malloc0 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:48.825 [2024-07-22 22:56:24.993154] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:48.825 22:56:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:48.825 { 00:15:48.825 "params": { 00:15:48.825 "name": "Nvme$subsystem", 00:15:48.825 "trtype": "$TEST_TRANSPORT", 00:15:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.825 "adrfam": "ipv4", 00:15:48.825 "trsvcid": "$NVMF_PORT", 00:15:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.825 "hdgst": ${hdgst:-false}, 00:15:48.825 "ddgst": ${ddgst:-false} 00:15:48.825 }, 00:15:48.825 "method": "bdev_nvme_attach_controller" 00:15:48.825 } 00:15:48.825 EOF 00:15:48.825 )") 00:15:48.825 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:48.825 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:15:48.825 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:48.825 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:48.825 "params": { 00:15:48.825 "name": "Nvme1", 00:15:48.825 "trtype": "tcp", 00:15:48.825 "traddr": "10.0.0.2", 00:15:48.825 "adrfam": "ipv4", 00:15:48.825 "trsvcid": "4420", 00:15:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.825 "hdgst": false, 00:15:48.825 "ddgst": false 00:15:48.825 }, 00:15:48.825 "method": "bdev_nvme_attach_controller" 00:15:48.825 }' 00:15:48.825 [2024-07-22 22:56:25.091951] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:15:48.825 [2024-07-22 22:56:25.092117] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815525 ] 00:15:49.084 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.084 [2024-07-22 22:56:25.233279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:49.084 [2024-07-22 22:56:25.389878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.084 [2024-07-22 22:56:25.391336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.084 [2024-07-22 22:56:25.391361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.344 I/O targets: 00:15:49.344 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:49.344 00:15:49.344 00:15:49.344 CUnit - A unit testing framework for C - Version 2.1-3 00:15:49.344 http://cunit.sourceforge.net/ 00:15:49.344 00:15:49.344 00:15:49.344 Suite: bdevio tests on: Nvme1n1 00:15:49.344 Test: blockdev write read block ...passed 00:15:49.604 Test: blockdev write zeroes read block ...passed 00:15:49.604 Test: blockdev write zeroes read no split ...passed 00:15:49.604 Test: blockdev write zeroes read split ...passed 00:15:49.604 Test: blockdev write zeroes read split partial ...passed 00:15:49.604 Test: blockdev reset ...[2024-07-22 22:56:25.811351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:49.604 [2024-07-22 22:56:25.811503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453020 (9): Bad file descriptor 00:15:49.864 [2024-07-22 22:56:25.958528] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:49.864 passed 00:15:49.864 Test: blockdev write read 8 blocks ...passed 00:15:49.864 Test: blockdev write read size > 128k ...passed 00:15:49.864 Test: blockdev write read invalid size ...passed 00:15:49.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.864 Test: blockdev write read max offset ...passed 00:15:49.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.864 Test: blockdev writev readv 8 blocks ...passed 00:15:49.864 Test: blockdev writev readv 30 x 1block ...passed 00:15:50.124 Test: blockdev writev readv block ...passed 00:15:50.124 Test: blockdev writev readv size > 128k ...passed 00:15:50.124 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:50.124 Test: blockdev comparev and writev ...[2024-07-22 22:56:26.261774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.261856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.261915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.261957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.262702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.262764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.262820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.262860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.263648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.263708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.263762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.263802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.264584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.264661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.264716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:50.124 [2024-07-22 22:56:26.264773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:50.124 passed 00:15:50.124 Test: blockdev nvme passthru rw ...passed 00:15:50.124 Test: blockdev nvme passthru vendor specific ...[2024-07-22 22:56:26.350039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:50.124 [2024-07-22 22:56:26.350107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.350579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:50.124 [2024-07-22 22:56:26.350639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.351077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:50.124 [2024-07-22 22:56:26.351135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:50.124 [2024-07-22 22:56:26.351605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:50.124 [2024-07-22 22:56:26.351665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:50.124 passed 00:15:50.124 Test: blockdev nvme admin passthru ...passed 00:15:50.124 Test: blockdev copy ...passed 00:15:50.124 00:15:50.124 Run Summary: Type Total Ran Passed Failed Inactive 00:15:50.124 suites 1 1 n/a 0 0 00:15:50.124 tests 23 23 23 0 0 00:15:50.124 asserts 152 152 152 0 n/a 00:15:50.124 00:15:50.124 Elapsed time = 1.608 seconds 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.384 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.643 rmmod nvme_tcp 00:15:50.643 rmmod nvme_fabrics 00:15:50.643 rmmod nvme_keyring 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
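For reference, the subsystem configuration that target/bdevio.sh drove through rpc_cmd in the trace above can be reproduced by hand; this is a minimal sketch using the stock scripts/rpc.py client against the default /var/tmp/spdk.sock socket (the namespace, addresses and flags are taken from this run; /tmp/bdevio_nvme.json is a hypothetical file holding the gen_nvmf_target_json output printed earlier):

# start the target inside the namespace created by nvmf_tcp_init, as traced above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
# once the target is listening on /var/tmp/spdk.sock (waitforlisten in the trace):
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                 # bdevio.sh@18
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                    # bdevio.sh@19
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: run the CUnit bdevio suite against the exported namespace,
# feeding it the bdev_nvme_attach_controller JSON shown in the trace
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json
# teardown, mirroring bdevio.sh@26 and nvmftestfini
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1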
00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 815492 ']' 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 815492 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 815492 ']' 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 815492 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 815492 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 815492' 00:15:50.643 killing process with pid 815492 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 815492 00:15:50.643 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 815492 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.903 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.445 00:15:53.445 real 0m8.306s 00:15:53.445 user 0m13.821s 00:15:53.445 sys 0m3.368s 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:53.445 ************************************ 00:15:53.445 END TEST nvmf_bdevio 00:15:53.445 ************************************ 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:53.445 00:15:53.445 real 4m54.056s 00:15:53.445 user 12m36.317s 00:15:53.445 sys 1m33.077s 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:53.445 ************************************ 00:15:53.445 END TEST nvmf_target_core 00:15:53.445 
************************************ 00:15:53.445 22:56:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.445 22:56:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:53.445 22:56:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.445 22:56:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.445 22:56:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.445 ************************************ 00:15:53.445 START TEST nvmf_target_extra 00:15:53.445 ************************************ 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:53.445 * Looking for test storage... 00:15:53.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.445 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
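The nvmf_example run launched below repeats the same nvmf_tcp_init network bring-up already traced for the bdevio test; condensed into the underlying commands, the topology it builds looks like this (cvl_0_0 and cvl_0_1 are the two ice ports discovered on this host, so the names are specific to this run):

# move the target port into its own namespace so the initiator (cvl_0_1, 10.0.0.1)
# and the target (cvl_0_0, 10.0.0.2) talk over a real TCP path on port 4420
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # reachability checks, as traced
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1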
00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.446 ************************************ 00:15:53.446 START TEST nvmf_example 00:15:53.446 ************************************ 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:53.446 * Looking for test storage... 00:15:53.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.446 22:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:53.446 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.447 22:56:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:56.752 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:56.752 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.752 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:56.752 Found net devices under 0000:84:00.0: cvl_0_0 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.753 22:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:56.753 Found net devices under 0000:84:00.1: cvl_0_1 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:56.753 00:15:56.753 --- 10.0.0.2 ping statistics --- 00:15:56.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.753 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:15:56.753 00:15:56.753 --- 10.0.0.1 ping statistics --- 00:15:56.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.753 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.753 22:56:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=817913 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 817913 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 817913 ']' 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.753 22:56:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:57.012 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:58.388 22:56:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:58.388 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.592 Initializing NVMe Controllers 00:16:10.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:10.592 Initialization complete. Launching workers. 00:16:10.592 ======================================================== 00:16:10.592 Latency(us) 00:16:10.592 Device Information : IOPS MiB/s Average min max 00:16:10.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12408.70 48.47 5160.00 1226.16 16841.44 00:16:10.592 ======================================================== 00:16:10.592 Total : 12408.70 48.47 5160.00 1226.16 16841.44 00:16:10.592 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.592 rmmod nvme_tcp 00:16:10.592 rmmod nvme_fabrics 00:16:10.592 rmmod nvme_keyring 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:16:10.592 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 817913 ']' 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 817913 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 817913 ']' 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 817913 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 817913 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 817913' 00:16:10.593 killing process with pid 817913 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 817913 00:16:10.593 22:56:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 817913 00:16:10.593 nvmf threads initialize successfully 00:16:10.593 bdev subsystem init successfully 00:16:10.593 created a nvmf target service 00:16:10.593 create targets's poll groups done 00:16:10.593 all subsystems of target started 00:16:10.593 nvmf target is running 00:16:10.593 all subsystems of target stopped 00:16:10.593 destroy targets's poll groups done 00:16:10.593 destroyed the nvmf target service 00:16:10.593 bdev subsystem finish successfully 00:16:10.593 nvmf threads destroy successfully 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.593 22:56:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:11.163 00:16:11.163 real 0m17.860s 00:16:11.163 user 0m47.921s 00:16:11.163 sys 0m4.501s 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:11.163 ************************************ 00:16:11.163 END TEST nvmf_example 00:16:11.163 ************************************ 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 
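For readability, the nvmf_example run traced above reduces to the standalone sketch below. It is a reconstruction from this log, not the test script itself: SPDK_DIR, the cvl_0_* interface names and the 10.0.0.x addresses are taken from this particular run, rpc.py stands in for the harness's rpc_cmd wrapper, and the namespace plumbing traced earlier (cvl_0_0 moved into cvl_0_0_ns_spdk at 10.0.0.2/24, cvl_0_1 left in the root namespace at 10.0.0.1/24, TCP port 4420 accepted through iptables) is assumed to already be in place.

    #!/usr/bin/env bash
    # Hedged sketch of the nvmf_example flow logged above, as standalone commands.
    set -euo pipefail

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path (matches this log)
    NS=cvl_0_0_ns_spdk                                           # target-side network namespace from this run

    # 1. Start the example NVMe-oF target inside the test namespace on cores 0-3 (-m 0xF).
    ip netns exec "$NS" "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    NVMF_PID=$!
    sleep 2   # the harness instead waits on the RPC socket via waitforlisten

    # 2. Configure the target over JSON-RPC (default socket /var/tmp/spdk.sock):
    #    TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem
    #    exposing that bdev as a namespace, listening on 10.0.0.2:4420.
    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 3. Drive the target from the root namespace with spdk_nvme_perf:
    #    queue depth 64, 4 KiB I/O, mixed random read/write (-M 30), 10 seconds.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

    # 4. Tear down the example target.
    kill "$NVMF_PID"

In this particular run the perf step reported roughly 12408.70 IOPS (48.47 MiB/s) at an average latency of about 5160 us, as shown in the summary table above.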
00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.163 ************************************ 00:16:11.163 START TEST nvmf_filesystem 00:16:11.163 ************************************ 00:16:11.163 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:11.425 * Looking for test storage... 00:16:11.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:11.426 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:16:11.426 
22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:11.426 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:11.427 #define SPDK_CONFIG_H 00:16:11.427 #define SPDK_CONFIG_APPS 1 00:16:11.427 #define SPDK_CONFIG_ARCH native 00:16:11.427 #undef SPDK_CONFIG_ASAN 00:16:11.427 #undef SPDK_CONFIG_AVAHI 00:16:11.427 #undef SPDK_CONFIG_CET 00:16:11.427 #define SPDK_CONFIG_COVERAGE 1 00:16:11.427 #define SPDK_CONFIG_CROSS_PREFIX 00:16:11.427 #undef SPDK_CONFIG_CRYPTO 00:16:11.427 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:11.427 #undef SPDK_CONFIG_CUSTOMOCF 00:16:11.427 #undef SPDK_CONFIG_DAOS 00:16:11.427 #define SPDK_CONFIG_DAOS_DIR 00:16:11.427 #define SPDK_CONFIG_DEBUG 1 00:16:11.427 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:11.427 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:16:11.427 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:16:11.427 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:11.427 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:11.427 #undef SPDK_CONFIG_DPDK_UADK 00:16:11.427 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:11.427 #define SPDK_CONFIG_EXAMPLES 1 00:16:11.427 #undef SPDK_CONFIG_FC 00:16:11.427 #define SPDK_CONFIG_FC_PATH 00:16:11.427 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:11.427 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:11.427 #undef SPDK_CONFIG_FUSE 00:16:11.427 #undef SPDK_CONFIG_FUZZER 00:16:11.427 #define SPDK_CONFIG_FUZZER_LIB 00:16:11.427 #undef SPDK_CONFIG_GOLANG 00:16:11.427 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:11.427 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:11.427 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:11.427 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:11.427 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:11.427 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:11.427 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:11.427 #define SPDK_CONFIG_IDXD 1 00:16:11.427 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:11.427 #undef SPDK_CONFIG_IPSEC_MB 00:16:11.427 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:11.427 #define SPDK_CONFIG_ISAL 1 00:16:11.427 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:11.427 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:11.427 #define SPDK_CONFIG_LIBDIR 00:16:11.427 #undef SPDK_CONFIG_LTO 00:16:11.427 #define SPDK_CONFIG_MAX_LCORES 128 00:16:11.427 #define SPDK_CONFIG_NVME_CUSE 1 00:16:11.427 #undef SPDK_CONFIG_OCF 00:16:11.427 #define SPDK_CONFIG_OCF_PATH 00:16:11.427 #define SPDK_CONFIG_OPENSSL_PATH 00:16:11.427 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:11.427 #define SPDK_CONFIG_PGO_DIR 00:16:11.427 #undef SPDK_CONFIG_PGO_USE 00:16:11.427 #define SPDK_CONFIG_PREFIX /usr/local 00:16:11.427 #undef SPDK_CONFIG_RAID5F 00:16:11.427 #undef SPDK_CONFIG_RBD 00:16:11.427 #define SPDK_CONFIG_RDMA 1 00:16:11.427 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:11.427 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:11.427 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:11.427 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:11.427 #define SPDK_CONFIG_SHARED 1 00:16:11.427 #undef SPDK_CONFIG_SMA 00:16:11.427 #define SPDK_CONFIG_TESTS 1 00:16:11.427 #undef SPDK_CONFIG_TSAN 00:16:11.427 #define SPDK_CONFIG_UBLK 1 00:16:11.427 #define SPDK_CONFIG_UBSAN 1 00:16:11.427 #undef SPDK_CONFIG_UNIT_TESTS 00:16:11.427 #undef SPDK_CONFIG_URING 00:16:11.427 #define 
SPDK_CONFIG_URING_PATH 00:16:11.427 #undef SPDK_CONFIG_URING_ZNS 00:16:11.427 #undef SPDK_CONFIG_USDT 00:16:11.427 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:11.427 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:11.427 #define SPDK_CONFIG_VFIO_USER 1 00:16:11.427 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:11.427 #define SPDK_CONFIG_VHOST 1 00:16:11.427 #define SPDK_CONFIG_VIRTIO 1 00:16:11.427 #undef SPDK_CONFIG_VTUNE 00:16:11.427 #define SPDK_CONFIG_VTUNE_DIR 00:16:11.427 #define SPDK_CONFIG_WERROR 1 00:16:11.427 #define SPDK_CONFIG_WPDK_DIR 00:16:11.427 #undef SPDK_CONFIG_XNVME 00:16:11.427 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:11.427 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:11.427 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:11.428 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:16:11.428 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export 
SPDK_TEST_RAID5 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:11.428 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:11.429 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:11.429 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:16:11.429 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 819602 ]] 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 819602 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@334 -- # local source fs size avail mount use 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.MXEUL4 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.MXEUL4/tests/target /tmp/spdk.MXEUL4 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # avails["$mount"]=37545226240 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083287552 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7538061312 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22531723264 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541643776 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=8994226176 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016659968 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22433792 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541299712 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541643776 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=344064 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read 
-r source fs size use avail _ mount 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:16:11.430 * Looking for test storage... 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=37545226240 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:16:11.430 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=9752653824 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@1689 -- # xtrace_fd 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.431 22:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.431 22:56:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.773 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:14.774 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:14.774 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:14.774 Found net devices under 0000:84:00.0: cvl_0_0 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:14.774 Found net devices under 0000:84:00.1: cvl_0_1 
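(For reference when reading the device scan above: the harness resolves each Intel E810 PCI function to its kernel netdev through sysfs, as the pci_net_devs expansion shows. A minimal sketch of the same lookup, assuming the PCI addresses 0000:84:00.0 and 0000:84:00.1 reported in this log:)

    # list the netdev(s) bound to each E810 port found above
    for pci in 0000:84:00.0 0000:84:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # prints the interface name, e.g. cvl_0_0 or cvl_0_1
    done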
00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.774 22:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.774 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.774 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.774 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.774 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:15.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:16:15.035 00:16:15.035 --- 10.0.0.2 ping statistics --- 00:16:15.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.035 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:16:15.035 00:16:15.035 --- 10.0.0.1 ping statistics --- 00:16:15.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.035 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:15.035 ************************************ 00:16:15.035 START TEST nvmf_filesystem_no_in_capsule 00:16:15.035 ************************************ 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=821377 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 821377 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 821377 ']' 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.035 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.035 [2024-07-22 22:56:51.326899] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:16:15.035 [2024-07-22 22:56:51.327066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.295 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.295 [2024-07-22 22:56:51.478784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.554 [2024-07-22 22:56:51.635050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.554 [2024-07-22 22:56:51.635147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.554 [2024-07-22 22:56:51.635184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.554 [2024-07-22 22:56:51.635214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.554 [2024-07-22 22:56:51.635238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
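(Condensed, the nvmf_tcp_init sequence traced above, which isolates the target port in a network namespace before launching nvmf_tgt, amounts to roughly the following shell steps. Interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are taken from this log, so treat this as an illustrative sketch rather than the canonical common.sh logic:)

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                              # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                        # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> host sanity check
    # then start the target inside the namespace, as the trace shows:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF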
00:16:15.554 [2024-07-22 22:56:51.635399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.554 [2024-07-22 22:56:51.635462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.554 [2024-07-22 22:56:51.635521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.555 [2024-07-22 22:56:51.635524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.555 [2024-07-22 22:56:51.825578] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.555 22:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.815 Malloc1 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.815 22:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.815 [2024-07-22 22:56:52.044137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:15.815 { 00:16:15.815 "name": "Malloc1", 00:16:15.815 "aliases": [ 00:16:15.815 "b8ccc8fb-4307-4cc2-b9c8-da27ec9e8742" 00:16:15.815 ], 00:16:15.815 "product_name": "Malloc disk", 00:16:15.815 "block_size": 512, 00:16:15.815 "num_blocks": 1048576, 00:16:15.815 "uuid": "b8ccc8fb-4307-4cc2-b9c8-da27ec9e8742", 00:16:15.815 "assigned_rate_limits": { 00:16:15.815 "rw_ios_per_sec": 0, 00:16:15.815 "rw_mbytes_per_sec": 0, 00:16:15.815 "r_mbytes_per_sec": 0, 00:16:15.815 "w_mbytes_per_sec": 0 00:16:15.815 }, 00:16:15.815 "claimed": true, 00:16:15.815 "claim_type": "exclusive_write", 00:16:15.815 "zoned": false, 00:16:15.815 "supported_io_types": { 00:16:15.815 "read": 
true, 00:16:15.815 "write": true, 00:16:15.815 "unmap": true, 00:16:15.815 "flush": true, 00:16:15.815 "reset": true, 00:16:15.815 "nvme_admin": false, 00:16:15.815 "nvme_io": false, 00:16:15.815 "nvme_io_md": false, 00:16:15.815 "write_zeroes": true, 00:16:15.815 "zcopy": true, 00:16:15.815 "get_zone_info": false, 00:16:15.815 "zone_management": false, 00:16:15.815 "zone_append": false, 00:16:15.815 "compare": false, 00:16:15.815 "compare_and_write": false, 00:16:15.815 "abort": true, 00:16:15.815 "seek_hole": false, 00:16:15.815 "seek_data": false, 00:16:15.815 "copy": true, 00:16:15.815 "nvme_iov_md": false 00:16:15.815 }, 00:16:15.815 "memory_domains": [ 00:16:15.815 { 00:16:15.815 "dma_device_id": "system", 00:16:15.815 "dma_device_type": 1 00:16:15.815 }, 00:16:15.815 { 00:16:15.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.815 "dma_device_type": 2 00:16:15.815 } 00:16:15.815 ], 00:16:15.815 "driver_specific": {} 00:16:15.815 } 00:16:15.815 ]' 00:16:15.815 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:16.074 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.641 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:16.641 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:16:16.641 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.641 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:16.641 22:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:19.170 22:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:19.170 22:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:19.428 22:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:20.801 ************************************ 00:16:20.801 START TEST filesystem_ext4 00:16:20.801 ************************************ 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
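
At this point the device under test is fully prepared: the preceding trace attached the Malloc1 bdev to nqn.2016-06.io.spdk:cnode1 as a namespace, opened a TCP listener on 10.0.0.2:4420, connected the host with nvme-cli, confirmed that the 512 MiB NVMe namespace matches the malloc bdev, and laid down a single GPT partition. Condensed into plain shell (a sketch of the commands visible above; rpc_cmd is the harness wrapper around scripts/rpc.py):

    # target side
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: connect, locate the device by serial, check the size, partition it
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # both sides report 536870912 bytes (512 MiB), so the test proceeds
    mkdir -p /mnt/device
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
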
00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:20.801 mke2fs 1.46.5 (30-Dec-2021) 00:16:20.801 Discarding device blocks: 0/522240 done 00:16:20.801 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:20.801 Filesystem UUID: ed57e43f-f809-4fbd-aa1b-ef6b39aa8e10 00:16:20.801 Superblock backups stored on blocks: 00:16:20.801 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:20.801 00:16:20.801 Allocating group tables: 0/64 done 00:16:20.801 Writing inode tables: 0/64 done 00:16:20.801 Creating journal (8192 blocks): done 00:16:20.801 Writing superblocks and filesystem accounting information: 0/64 done 00:16:20.801 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:16:20.801 22:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:21.059 
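
Every filesystem_* subtest exercises the freshly created filesystem the same way before checking that the target survived the I/O: mount, create a file, sync, delete it, sync again, unmount. The steps traced here from target/filesystem.sh amount to the following sketch (the liveness checks that follow it are traced next):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    # afterwards: kill -0 $nvmfpid confirms nvmf_tgt is still running, and
    # lsblk output is grepped for both nvme0n1 and nvme0n1p1
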
22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 821377 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:21.059 00:16:21.059 real 0m0.520s 00:16:21.059 user 0m0.032s 00:16:21.059 sys 0m0.079s 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:21.059 ************************************ 00:16:21.059 END TEST filesystem_ext4 00:16:21.059 ************************************ 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.059 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:21.318 ************************************ 00:16:21.318 START TEST filesystem_btrfs 00:16:21.318 ************************************ 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:16:21.318 22:56:57 
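
The make_filesystem helper whose locals are traced at this point only has to pick the right force flag for mkfs: ext4 wants -F, while btrfs and xfs take -f. Reconstructed from the trace (a sketch, not the verbatim autotest_common.sh source; any retry or error handling in the real helper is not visible in this log and is omitted):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # ext4 uses -F
        else
            force=-f    # btrfs and xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }
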
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:16:21.318 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:21.576 btrfs-progs v6.6.2 00:16:21.576 See https://btrfs.readthedocs.io for more information. 00:16:21.576 00:16:21.576 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:21.576 NOTE: several default settings have changed in version 5.15, please make sure 00:16:21.576 this does not affect your deployments: 00:16:21.576 - DUP for metadata (-m dup) 00:16:21.576 - enabled no-holes (-O no-holes) 00:16:21.576 - enabled free-space-tree (-R free-space-tree) 00:16:21.576 00:16:21.576 Label: (null) 00:16:21.576 UUID: 1abf0591-31c2-4861-b7e3-abf5e6f7234b 00:16:21.576 Node size: 16384 00:16:21.576 Sector size: 4096 00:16:21.576 Filesystem size: 510.00MiB 00:16:21.576 Block group profiles: 00:16:21.576 Data: single 8.00MiB 00:16:21.576 Metadata: DUP 32.00MiB 00:16:21.576 System: DUP 8.00MiB 00:16:21.576 SSD detected: yes 00:16:21.576 Zoned device: no 00:16:21.576 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:16:21.576 Runtime features: free-space-tree 00:16:21.576 Checksum: crc32c 00:16:21.576 Number of devices: 1 00:16:21.576 Devices: 00:16:21.576 ID SIZE PATH 00:16:21.576 1 510.00MiB /dev/nvme0n1p1 00:16:21.576 00:16:21.576 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:16:21.576 22:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:21.833 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 821377 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:21.834 00:16:21.834 real 0m0.718s 00:16:21.834 user 0m0.026s 00:16:21.834 sys 0m0.146s 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:21.834 ************************************ 00:16:21.834 END TEST filesystem_btrfs 00:16:21.834 ************************************ 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.834 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:22.091 ************************************ 00:16:22.091 START TEST filesystem_xfs 00:16:22.091 ************************************ 00:16:22.091 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:16:22.092 22:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:16:22.092 22:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:22.092 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:22.092 = sectsz=512 attr=2, projid32bit=1 00:16:22.092 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:22.092 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:22.092 data = bsize=4096 blocks=130560, imaxpct=25 00:16:22.092 = sunit=0 swidth=0 blks 00:16:22.092 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:22.092 log =internal log bsize=4096 blocks=16384, version=2 00:16:22.092 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:22.092 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:23.024 Discarding blocks...Done. 00:16:23.024 22:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:16:23.024 22:56:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 821377 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:25.552 00:16:25.552 real 0m3.619s 00:16:25.552 user 0m0.020s 00:16:25.552 sys 0m0.069s 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:25.552 ************************************ 00:16:25.552 END TEST filesystem_xfs 00:16:25.552 ************************************ 00:16:25.552 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:16:25.552 22:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:25.811 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:25.811 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.811 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.811 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:16:25.811 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:25.811 22:57:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 821377 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 821377 ']' 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 821377 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 821377 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.811 22:57:02 
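
Teardown, which this block traces, is the mirror image of the setup: drop the test partition, flush, disconnect the initiator, delete the subsystem over RPC, then stop the target. Condensed (a sketch of the commands visible here; the target pid is 821377 in this run):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # remove the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill $nvmfpid && wait $nvmfpid                   # killprocess: terminate nvmf_tgt and reap it
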
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 821377' 00:16:25.811 killing process with pid 821377 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 821377 00:16:25.811 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 821377 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:26.378 00:16:26.378 real 0m11.435s 00:16:26.378 user 0m43.408s 00:16:26.378 sys 0m1.965s 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:26.378 ************************************ 00:16:26.378 END TEST nvmf_filesystem_no_in_capsule 00:16:26.378 ************************************ 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.378 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:26.638 ************************************ 00:16:26.638 START TEST nvmf_filesystem_in_capsule 00:16:26.638 ************************************ 00:16:26.638 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:16:26.638 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:26.638 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:26.638 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.638 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.638 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=822913 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 822913 00:16:26.639 22:57:02 
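
The suite now repeats the same three-filesystem matrix with in-capsule data enabled: nvmf_filesystem_part is invoked with 4096 instead of 0, a fresh nvmf_tgt (pid 822913) is started, and that 4096 appears below as the in-capsule data size of the TCP transport. In NVMe/TCP terms, write payloads up to that size ride inside the command capsule itself instead of being fetched by the target with a separate Ready-to-Transfer exchange, which is the path this variant is meant to cover. The target-side difference from the first pass is the transport RPC (sketch, using the command traced a little further on; -c is the in-capsule data size in bytes):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
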
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 822913 ']' 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.639 22:57:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:26.639 [2024-07-22 22:57:02.841915] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:16:26.639 [2024-07-22 22:57:02.842084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.639 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.898 [2024-07-22 22:57:02.996087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.898 [2024-07-22 22:57:03.149708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.898 [2024-07-22 22:57:03.149814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.898 [2024-07-22 22:57:03.149852] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.898 [2024-07-22 22:57:03.149883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.898 [2024-07-22 22:57:03.149908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
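
The startup notices above map directly onto the flags nvmfappstart passed when launching the target inside the cvl_0_0_ns_spdk network namespace:

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF
    #   -m 0xF     reactor core mask: four cores, hence the four "Reactor started" lines that follow
    #   -e 0xFFFF  tracepoint group mask (the "Tracepoint Group Mask 0xFFFF specified" notice)
    #   -i 0       shared-memory id, which is why the notices point at "spdk_trace -s nvmf -i 0"
    #              and /dev/shm/nvmf_trace.0
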
00:16:26.898 [2024-07-22 22:57:03.150078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.898 [2024-07-22 22:57:03.150138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.898 [2024-07-22 22:57:03.150218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.898 [2024-07-22 22:57:03.150225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.163 [2024-07-22 22:57:03.334642] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.163 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.423 Malloc1 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.423 [2024-07-22 22:57:03.559925] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:27.423 { 00:16:27.423 "name": "Malloc1", 00:16:27.423 "aliases": [ 00:16:27.423 "2d336284-6a38-43df-ac57-873f593c54cc" 00:16:27.423 ], 00:16:27.423 "product_name": "Malloc disk", 00:16:27.423 "block_size": 512, 00:16:27.423 "num_blocks": 1048576, 00:16:27.423 "uuid": "2d336284-6a38-43df-ac57-873f593c54cc", 00:16:27.423 "assigned_rate_limits": { 00:16:27.423 "rw_ios_per_sec": 0, 00:16:27.423 "rw_mbytes_per_sec": 0, 00:16:27.423 "r_mbytes_per_sec": 0, 00:16:27.423 "w_mbytes_per_sec": 0 00:16:27.423 }, 00:16:27.423 "claimed": true, 00:16:27.423 "claim_type": "exclusive_write", 00:16:27.423 "zoned": false, 00:16:27.423 "supported_io_types": { 00:16:27.423 "read": true, 00:16:27.423 "write": true, 00:16:27.423 "unmap": true, 00:16:27.423 "flush": true, 00:16:27.423 "reset": true, 00:16:27.423 "nvme_admin": false, 
00:16:27.423 "nvme_io": false, 00:16:27.423 "nvme_io_md": false, 00:16:27.423 "write_zeroes": true, 00:16:27.423 "zcopy": true, 00:16:27.423 "get_zone_info": false, 00:16:27.423 "zone_management": false, 00:16:27.423 "zone_append": false, 00:16:27.423 "compare": false, 00:16:27.423 "compare_and_write": false, 00:16:27.423 "abort": true, 00:16:27.423 "seek_hole": false, 00:16:27.423 "seek_data": false, 00:16:27.423 "copy": true, 00:16:27.423 "nvme_iov_md": false 00:16:27.423 }, 00:16:27.423 "memory_domains": [ 00:16:27.423 { 00:16:27.423 "dma_device_id": "system", 00:16:27.423 "dma_device_type": 1 00:16:27.423 }, 00:16:27.423 { 00:16:27.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.423 "dma_device_type": 2 00:16:27.423 } 00:16:27.423 ], 00:16:27.423 "driver_specific": {} 00:16:27.423 } 00:16:27.423 ]' 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:27.423 22:57:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.357 22:57:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.357 22:57:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.357 22:57:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.357 22:57:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:28.357 22:57:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:30.258 22:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:30.258 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:30.516 22:57:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:31.083 22:57:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:32.015 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:32.015 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:32.273 ************************************ 00:16:32.273 START TEST filesystem_in_capsule_ext4 00:16:32.273 ************************************ 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:32.273 22:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:16:32.273 22:57:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:32.273 mke2fs 1.46.5 (30-Dec-2021) 00:16:32.273 Discarding device blocks: 0/522240 done 00:16:32.273 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:32.273 Filesystem UUID: 46fae9a4-1b1e-48ec-8588-120ab6988626 00:16:32.273 Superblock backups stored on blocks: 00:16:32.273 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:32.273 00:16:32.273 Allocating group tables: 0/64 done 00:16:32.273 Writing inode tables: 0/64 done 00:16:35.604 Creating journal (8192 blocks): done 00:16:35.604 Writing superblocks and filesystem accounting information: 0/64 done 00:16:35.604 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:35.604 22:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 822913 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:35.604 00:16:35.604 real 0m3.180s 00:16:35.604 user 0m0.027s 00:16:35.604 sys 0m0.067s 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:35.604 ************************************ 00:16:35.604 END TEST filesystem_in_capsule_ext4 00:16:35.604 ************************************ 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:35.604 ************************************ 00:16:35.604 START TEST filesystem_in_capsule_btrfs 00:16:35.604 ************************************ 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:16:35.604 22:57:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:35.863 btrfs-progs v6.6.2 00:16:35.863 See https://btrfs.readthedocs.io for more information. 00:16:35.863 00:16:35.863 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:35.863 NOTE: several default settings have changed in version 5.15, please make sure 00:16:35.863 this does not affect your deployments: 00:16:35.863 - DUP for metadata (-m dup) 00:16:35.863 - enabled no-holes (-O no-holes) 00:16:35.863 - enabled free-space-tree (-R free-space-tree) 00:16:35.863 00:16:35.863 Label: (null) 00:16:35.863 UUID: 594659a6-9f60-41db-853e-fd00fc148dd5 00:16:35.863 Node size: 16384 00:16:35.863 Sector size: 4096 00:16:35.863 Filesystem size: 510.00MiB 00:16:35.863 Block group profiles: 00:16:35.863 Data: single 8.00MiB 00:16:35.863 Metadata: DUP 32.00MiB 00:16:35.863 System: DUP 8.00MiB 00:16:35.863 SSD detected: yes 00:16:35.863 Zoned device: no 00:16:35.863 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:16:35.863 Runtime features: free-space-tree 00:16:35.863 Checksum: crc32c 00:16:35.863 Number of devices: 1 00:16:35.863 Devices: 00:16:35.863 ID SIZE PATH 00:16:35.863 1 510.00MiB /dev/nvme0n1p1 00:16:35.863 00:16:35.863 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:16:35.863 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:36.430 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:36.430 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:36.430 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:36.430 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:36.430 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:36.430 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 822913 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:36.688 00:16:36.688 real 0m1.201s 00:16:36.688 user 0m0.024s 00:16:36.688 sys 0m0.147s 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:36.688 ************************************ 00:16:36.688 END TEST filesystem_in_capsule_btrfs 00:16:36.688 ************************************ 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:36.688 ************************************ 00:16:36.688 START TEST filesystem_in_capsule_xfs 00:16:36.688 ************************************ 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:16:36.688 22:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:16:36.688 22:57:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:36.947 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:36.947 = sectsz=512 attr=2, projid32bit=1 00:16:36.947 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:36.947 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:36.947 data = bsize=4096 blocks=130560, imaxpct=25 00:16:36.947 = sunit=0 swidth=0 blks 00:16:36.947 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:36.947 log =internal log bsize=4096 blocks=16384, version=2 00:16:36.947 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:36.947 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:37.880 Discarding blocks...Done. 00:16:37.880 22:57:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:16:37.880 22:57:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 822913 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:40.406 00:16:40.406 real 0m3.708s 00:16:40.406 user 0m0.025s 00:16:40.406 sys 0m0.078s 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.406 
22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:40.406 ************************************ 00:16:40.406 END TEST filesystem_in_capsule_xfs 00:16:40.406 ************************************ 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:16:40.406 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:40.663 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:40.663 22:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.921 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 822913 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 822913 ']' 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 822913 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
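The teardown traced around this point boils down to a short sequence: drop the partition that carried the test filesystem, flush, disconnect the NVMe/TCP controller on the initiator, and delete the subsystem on the target. A minimal standalone sketch of that sequence, assuming the same device (/dev/nvme0n1) and subsystem NQN as in this run; the rpc.py path is illustrative, not taken from this log:

#!/usr/bin/env bash
# Hedged sketch of the filesystem-test teardown traced above.
# Assumptions: /dev/nvme0n1 is the NVMe/TCP-attached namespace, partition 1
# held the test filesystem, and rpc.py can reach the running nvmf_tgt.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative path
NQN=nqn.2016-06.io.spdk:cnode1

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the test partition
sync                                             # flush outstanding writes
nvme disconnect -n "$NQN"                        # detach the initiator-side controller

# Wait until the namespace's serial number no longer shows up in lsblk.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

"$RPC" nvmf_delete_subsystem "$NQN"              # remove the subsystem on the target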
00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822913 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822913' 00:16:40.922 killing process with pid 822913 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 822913 00:16:40.922 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 822913 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:41.489 00:16:41.489 real 0m14.985s 00:16:41.489 user 0m57.291s 00:16:41.489 sys 0m2.203s 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:41.489 ************************************ 00:16:41.489 END TEST nvmf_filesystem_in_capsule 00:16:41.489 ************************************ 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.489 rmmod nvme_tcp 00:16:41.489 rmmod nvme_fabrics 00:16:41.489 rmmod nvme_keyring 00:16:41.489 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.750 22:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.656 22:57:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.656 00:16:43.656 real 0m32.430s 00:16:43.656 user 1m41.898s 00:16:43.656 sys 0m7.011s 00:16:43.656 22:57:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.656 22:57:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:43.656 ************************************ 00:16:43.656 END TEST nvmf_filesystem 00:16:43.656 ************************************ 00:16:43.656 22:57:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:43.657 22:57:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:43.657 22:57:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.657 22:57:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.657 22:57:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.657 ************************************ 00:16:43.657 START TEST nvmf_target_discovery 00:16:43.657 ************************************ 00:16:43.657 22:57:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:43.917 * Looking for test storage... 
00:16:43.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.917 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.918 22:57:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:16:47.213 22:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:47.213 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:47.213 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:47.213 Found net devices under 0000:84:00.0: cvl_0_0 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
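The device-discovery loop traced here resolves each supported PCI function to its kernel network interface through sysfs before any addressing happens. A small hedged sketch of that lookup; the PCI address matches the one reported in this run, everything else is illustrative:

# Sketch only: map a PCI function to its net interface the same way the
# harness does, via /sys/bus/pci/devices/<addr>/net/.
pci=0000:84:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue                  # no bound net driver, skip it
    echo "Found net device under $pci: ${dev##*/}"
done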
00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:47.213 Found net devices under 0000:84:00.1: cvl_0_1 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.213 22:57:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.213 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.213 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.213 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.213 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:16:47.214 00:16:47.214 --- 10.0.0.2 ping statistics --- 00:16:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.214 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:16:47.214 00:16:47.214 --- 10.0.0.1 ping statistics --- 00:16:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.214 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=827286 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 827286 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 827286 ']' 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:47.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.214 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.214 [2024-07-22 22:57:23.306638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:16:47.214 [2024-07-22 22:57:23.306798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.214 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.214 [2024-07-22 22:57:23.456007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.473 [2024-07-22 22:57:23.611954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.473 [2024-07-22 22:57:23.612053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.473 [2024-07-22 22:57:23.612089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.473 [2024-07-22 22:57:23.612120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.473 [2024-07-22 22:57:23.612147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.473 [2024-07-22 22:57:23.612320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.473 [2024-07-22 22:57:23.612370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.473 [2024-07-22 22:57:23.612429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.473 [2024-07-22 22:57:23.612433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 [2024-07-22 22:57:23.833787] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:47.732 22:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 Null1 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 [2024-07-22 22:57:23.879103] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 Null2 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 Null3 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.732 22:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.732 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 Null4 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.733 22:57:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:16:47.991 00:16:47.991 Discovery Log Number of Records 6, Generation counter 6 00:16:47.991 
=====Discovery Log Entry 0====== 00:16:47.991 trtype: tcp 00:16:47.991 adrfam: ipv4 00:16:47.991 subtype: current discovery subsystem 00:16:47.991 treq: not required 00:16:47.991 portid: 0 00:16:47.991 trsvcid: 4420 00:16:47.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:47.991 traddr: 10.0.0.2 00:16:47.991 eflags: explicit discovery connections, duplicate discovery information 00:16:47.991 sectype: none 00:16:47.991 =====Discovery Log Entry 1====== 00:16:47.991 trtype: tcp 00:16:47.991 adrfam: ipv4 00:16:47.991 subtype: nvme subsystem 00:16:47.992 treq: not required 00:16:47.992 portid: 0 00:16:47.992 trsvcid: 4420 00:16:47.992 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:47.992 traddr: 10.0.0.2 00:16:47.992 eflags: none 00:16:47.992 sectype: none 00:16:47.992 =====Discovery Log Entry 2====== 00:16:47.992 trtype: tcp 00:16:47.992 adrfam: ipv4 00:16:47.992 subtype: nvme subsystem 00:16:47.992 treq: not required 00:16:47.992 portid: 0 00:16:47.992 trsvcid: 4420 00:16:47.992 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:47.992 traddr: 10.0.0.2 00:16:47.992 eflags: none 00:16:47.992 sectype: none 00:16:47.992 =====Discovery Log Entry 3====== 00:16:47.992 trtype: tcp 00:16:47.992 adrfam: ipv4 00:16:47.992 subtype: nvme subsystem 00:16:47.992 treq: not required 00:16:47.992 portid: 0 00:16:47.992 trsvcid: 4420 00:16:47.992 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:47.992 traddr: 10.0.0.2 00:16:47.992 eflags: none 00:16:47.992 sectype: none 00:16:47.992 =====Discovery Log Entry 4====== 00:16:47.992 trtype: tcp 00:16:47.992 adrfam: ipv4 00:16:47.992 subtype: nvme subsystem 00:16:47.992 treq: not required 00:16:47.992 portid: 0 00:16:47.992 trsvcid: 4420 00:16:47.992 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:47.992 traddr: 10.0.0.2 00:16:47.992 eflags: none 00:16:47.992 sectype: none 00:16:47.992 =====Discovery Log Entry 5====== 00:16:47.992 trtype: tcp 00:16:47.992 adrfam: ipv4 00:16:47.992 subtype: discovery subsystem referral 00:16:47.992 treq: not required 00:16:47.992 portid: 0 00:16:47.992 trsvcid: 4430 00:16:47.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:47.992 traddr: 10.0.0.2 00:16:47.992 eflags: none 00:16:47.992 sectype: none 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:47.992 Perform nvmf subsystem discovery via RPC 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 [ 00:16:47.992 { 00:16:47.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:47.992 "subtype": "Discovery", 00:16:47.992 "listen_addresses": [ 00:16:47.992 { 00:16:47.992 "trtype": "TCP", 00:16:47.992 "adrfam": "IPv4", 00:16:47.992 "traddr": "10.0.0.2", 00:16:47.992 "trsvcid": "4420" 00:16:47.992 } 00:16:47.992 ], 00:16:47.992 "allow_any_host": true, 00:16:47.992 "hosts": [] 00:16:47.992 }, 00:16:47.992 { 00:16:47.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.992 "subtype": "NVMe", 00:16:47.992 "listen_addresses": [ 00:16:47.992 { 00:16:47.992 "trtype": "TCP", 00:16:47.992 "adrfam": "IPv4", 00:16:47.992 "traddr": "10.0.0.2", 00:16:47.992 "trsvcid": "4420" 00:16:47.992 } 00:16:47.992 ], 00:16:47.992 "allow_any_host": true, 00:16:47.992 "hosts": [], 00:16:47.992 
"serial_number": "SPDK00000000000001", 00:16:47.992 "model_number": "SPDK bdev Controller", 00:16:47.992 "max_namespaces": 32, 00:16:47.992 "min_cntlid": 1, 00:16:47.992 "max_cntlid": 65519, 00:16:47.992 "namespaces": [ 00:16:47.992 { 00:16:47.992 "nsid": 1, 00:16:47.992 "bdev_name": "Null1", 00:16:47.992 "name": "Null1", 00:16:47.992 "nguid": "1CE7A21852714934BE46E87CD4C2E92A", 00:16:47.992 "uuid": "1ce7a218-5271-4934-be46-e87cd4c2e92a" 00:16:47.992 } 00:16:47.992 ] 00:16:47.992 }, 00:16:47.992 { 00:16:47.992 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:47.992 "subtype": "NVMe", 00:16:47.992 "listen_addresses": [ 00:16:47.992 { 00:16:47.992 "trtype": "TCP", 00:16:47.992 "adrfam": "IPv4", 00:16:47.992 "traddr": "10.0.0.2", 00:16:47.992 "trsvcid": "4420" 00:16:47.992 } 00:16:47.992 ], 00:16:47.992 "allow_any_host": true, 00:16:47.992 "hosts": [], 00:16:47.992 "serial_number": "SPDK00000000000002", 00:16:47.992 "model_number": "SPDK bdev Controller", 00:16:47.992 "max_namespaces": 32, 00:16:47.992 "min_cntlid": 1, 00:16:47.992 "max_cntlid": 65519, 00:16:47.992 "namespaces": [ 00:16:47.992 { 00:16:47.992 "nsid": 1, 00:16:47.992 "bdev_name": "Null2", 00:16:47.992 "name": "Null2", 00:16:47.992 "nguid": "E31C696EA6444A899697361EDD6A7510", 00:16:47.992 "uuid": "e31c696e-a644-4a89-9697-361edd6a7510" 00:16:47.992 } 00:16:47.992 ] 00:16:47.992 }, 00:16:47.992 { 00:16:47.992 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:47.992 "subtype": "NVMe", 00:16:47.992 "listen_addresses": [ 00:16:47.992 { 00:16:47.992 "trtype": "TCP", 00:16:47.992 "adrfam": "IPv4", 00:16:47.992 "traddr": "10.0.0.2", 00:16:47.992 "trsvcid": "4420" 00:16:47.992 } 00:16:47.992 ], 00:16:47.992 "allow_any_host": true, 00:16:47.992 "hosts": [], 00:16:47.992 "serial_number": "SPDK00000000000003", 00:16:47.992 "model_number": "SPDK bdev Controller", 00:16:47.992 "max_namespaces": 32, 00:16:47.992 "min_cntlid": 1, 00:16:47.992 "max_cntlid": 65519, 00:16:47.992 "namespaces": [ 00:16:47.992 { 00:16:47.992 "nsid": 1, 00:16:47.992 "bdev_name": "Null3", 00:16:47.992 "name": "Null3", 00:16:47.992 "nguid": "C2F32487186743C8A6FFF1BE64619F18", 00:16:47.992 "uuid": "c2f32487-1867-43c8-a6ff-f1be64619f18" 00:16:47.992 } 00:16:47.992 ] 00:16:47.992 }, 00:16:47.992 { 00:16:47.992 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:47.992 "subtype": "NVMe", 00:16:47.992 "listen_addresses": [ 00:16:47.992 { 00:16:47.992 "trtype": "TCP", 00:16:47.992 "adrfam": "IPv4", 00:16:47.992 "traddr": "10.0.0.2", 00:16:47.992 "trsvcid": "4420" 00:16:47.992 } 00:16:47.992 ], 00:16:47.992 "allow_any_host": true, 00:16:47.992 "hosts": [], 00:16:47.992 "serial_number": "SPDK00000000000004", 00:16:47.992 "model_number": "SPDK bdev Controller", 00:16:47.992 "max_namespaces": 32, 00:16:47.992 "min_cntlid": 1, 00:16:47.992 "max_cntlid": 65519, 00:16:47.992 "namespaces": [ 00:16:47.992 { 00:16:47.992 "nsid": 1, 00:16:47.992 "bdev_name": "Null4", 00:16:47.992 "name": "Null4", 00:16:47.992 "nguid": "3526E34E2F264F7B89C0AD9BE5373DAA", 00:16:47.992 "uuid": "3526e34e-2f26-4f7b-89c0-ad9be5373daa" 00:16:47.992 } 00:16:47.992 ] 00:16:47.992 } 00:16:47.992 ] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.992 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:47.993 22:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.993 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.251 rmmod nvme_tcp 00:16:48.251 rmmod nvme_fabrics 00:16:48.251 rmmod nvme_keyring 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
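The teardown traced above is symmetric with the setup: each of the four subsystems is deleted together with its backing null bdev, the port-4430 referral is dropped, and bdev_get_bdevs is used to confirm nothing is left behind. A condensed sketch of that sequence, assuming scripts/rpc.py and the names used in this run:

    for i in 1 2 3 4; do
        ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        ./scripts/rpc.py bdev_null_delete Null$i
    done
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # expect empty output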
00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 827286 ']' 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 827286 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 827286 ']' 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 827286 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 827286 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 827286' 00:16:48.251 killing process with pid 827286 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 827286 00:16:48.251 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 827286 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.512 22:57:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.053 00:16:51.053 real 0m6.835s 00:16:51.053 user 0m5.612s 00:16:51.053 sys 0m2.859s 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.053 ************************************ 00:16:51.053 END TEST nvmf_target_discovery 00:16:51.053 ************************************ 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.053 ************************************ 00:16:51.053 START TEST nvmf_referrals 00:16:51.053 ************************************ 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:51.053 * Looking for test storage... 00:16:51.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.053 22:57:26 
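As part of sourcing nvmf/common.sh, the referrals test generates a fresh initiator identity: nvme gen-hostnqn returns a uuid-based host NQN, and the uuid portion is reused as the host ID that every later nvme discover/connect call receives. A minimal sketch of that derivation (the parameter expansion is an illustrative way to split the NQN, not necessarily the harness's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid after the last ':'
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json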
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.053 22:57:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.346 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.347 22:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:54.347 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.347 22:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:54.347 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:54.347 Found net devices under 0000:84:00.0: cvl_0_0 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:54.347 Found net devices under 0000:84:00.1: cvl_0_1 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
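gather_supported_nvmf_pci_devs, traced above, matches PCI devices against a table of Intel (e810/x722) and Mellanox device IDs and then resolves each hit to its kernel net device through sysfs, which is how 0000:84:00.0 and 0000:84:00.1 become cvl_0_0 and cvl_0_1 here. The resolution step is a plain directory listing; a minimal sketch using a PCI address from this run (the operstate check is an assumption about how the "up" test is implemented):

    pci=0000:84:00.0
    ls "/sys/bus/pci/devices/$pci/net/"        # prints the interface bound to that port, e.g. cvl_0_0
    cat /sys/class/net/cvl_0_0/operstate       # the interface must be up before the harness will use it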
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.347 22:57:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:54.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:16:54.347 00:16:54.347 --- 10.0.0.2 ping statistics --- 00:16:54.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.347 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:16:54.347 00:16:54.347 --- 10.0.0.1 ping statistics --- 00:16:54.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.347 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.347 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=829510 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 829510 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 829510 ']' 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
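nvmf_tcp_init, traced above, splits the two NIC ports into an initiator side (cvl_0_1, 10.0.0.1) and a target side (cvl_0_0, 10.0.0.2) that lives in its own network namespace, verifies the link with a ping in each direction, and then launches nvmf_tgt inside that namespace. A condensed sketch using the names and addresses from this run (the nvmf_tgt path is abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &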
00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.348 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.348 [2024-07-22 22:57:30.301034] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:16:54.348 [2024-07-22 22:57:30.301165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.348 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.348 [2024-07-22 22:57:30.424766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.348 [2024-07-22 22:57:30.557939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.348 [2024-07-22 22:57:30.558026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.348 [2024-07-22 22:57:30.558053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.348 [2024-07-22 22:57:30.558077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.348 [2024-07-22 22:57:30.558097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.348 [2024-07-22 22:57:30.558245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.348 [2024-07-22 22:57:30.558389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.348 [2024-07-22 22:57:30.558425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.348 [2024-07-22 22:57:30.558429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 [2024-07-22 22:57:30.745641] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.609 22:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 [2024-07-22 22:57:30.762659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.609 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:54.610 22:57:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:54.891 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.157 22:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
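This block exercises the referral RPCs end to end: the TCP transport is created, the discovery service is exposed on 10.0.0.2:8009, three referrals on port 4430 are added and read back through both nvmf_discovery_get_referrals and nvme discover, then removed, and finally 127.0.0.2 is re-added twice (once as a plain discovery referral, once naming nqn.2016-06.io.spdk:cnode1), which is why the rpc path reports 127.0.0.2 twice above. A condensed sketch of that RPC sequence, assuming scripts/rpc.py against the target's socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length      # expect 3
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1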
00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:55.157 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.416 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.674 22:57:31 
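The get_referral_ips nvme and get_discovery_entries helpers used throughout this block are thin wrappers around nvme discover's JSON output: one collects the traddr of every record that is not the current discovery subsystem, the other keeps only records of a given subtype so the advertised subnqn can be checked. A minimal host-side sketch, reusing the hostnqn/hostid generated earlier in this run (the discover() function is just shorthand for the nvme discover invocation shown in the trace):

    discover() {
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    # referral/subsystem addresses as the host sees them
    discover | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # subnqn advertised for entries of one subtype
    discover | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'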
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:55.674 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:55.675 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.675 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:55.675 22:57:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:55.933 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.192 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
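nvmftestfini then unwinds the setup: the kernel initiator modules are unloaded (the rmmod lines below), the nvmf_tgt process is killed by its recorded pid, the cvl_0_0_ns_spdk namespace is removed and the test addresses are flushed. A condensed sketch of that cleanup, mirroring the traced commands (the explicit ip netns delete stands in for the harness's _remove_spdk_ns helper):

    sync
    modprobe -v -r nvme-tcp            # the rmmod lines in this run show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # nvmfpid is the target pid recorded at startup (829510 here)
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1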
00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.450 rmmod nvme_tcp 00:16:56.450 rmmod nvme_fabrics 00:16:56.450 rmmod nvme_keyring 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 829510 ']' 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 829510 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 829510 ']' 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 829510 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829510 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829510' 00:16:56.450 killing process with pid 829510 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 829510 00:16:56.450 22:57:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 829510 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.711 22:57:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.252 00:16:59.252 real 0m8.212s 00:16:59.252 user 0m11.838s 00:16:59.252 sys 0m3.276s 00:16:59.252 22:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 ************************************ 00:16:59.252 END TEST nvmf_referrals 00:16:59.252 ************************************ 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 ************************************ 00:16:59.252 START TEST nvmf_connect_disconnect 00:16:59.252 ************************************ 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:59.252 * Looking for test storage... 00:16:59.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.252 22:57:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:02.547 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:02.547 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.547 22:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:02.547 Found net devices under 0000:84:00.0: cvl_0_0 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:02.547 Found net devices under 0000:84:00.1: cvl_0_1 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.547 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:17:02.548 00:17:02.548 --- 10.0.0.2 ping statistics --- 00:17:02.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.548 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:02.548 00:17:02.548 --- 10.0.0.1 ping statistics --- 00:17:02.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.548 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=831946 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 831946 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 831946 ']' 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.548 22:57:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:02.548 [2024-07-22 22:57:38.708748] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
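nvmfappstart above launches nvmf_tgt inside the server-side network namespace and waits for its RPC socket before any configuration is issued. A minimal equivalent outside the harness (sketch only; the namespace name and flags follow the trace above, and polling rpc_get_methods is just one way to wait for /var/tmp/spdk.sock):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the RPC socket until the target is ready to accept configuration RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done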
00:17:02.548 [2024-07-22 22:57:38.708909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.548 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.807 [2024-07-22 22:57:38.889721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.807 [2024-07-22 22:57:39.053911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.807 [2024-07-22 22:57:39.053971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.807 [2024-07-22 22:57:39.053991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.807 [2024-07-22 22:57:39.054008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.807 [2024-07-22 22:57:39.054023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.807 [2024-07-22 22:57:39.057336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.807 [2024-07-22 22:57:39.057381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.807 [2024-07-22 22:57:39.057437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.807 [2024-07-22 22:57:39.057442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.066 [2024-07-22 22:57:39.249646] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.066 22:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.066 [2024-07-22 22:57:39.316356] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:17:03.066 22:57:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:05.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.469 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:18.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:25.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:19:34.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:39.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:43.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:48.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:55.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:00.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:02.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:07.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:09.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:18.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:20.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:23.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:25.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:27.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:30.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:37.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:39.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:44.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:46.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:48.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:51.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:53.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.813 rmmod nvme_tcp 00:20:55.813 rmmod nvme_fabrics 00:20:55.813 rmmod nvme_keyring 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 831946 ']' 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 831946 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 831946 ']' 00:20:55.813 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 831946 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 831946 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 831946' 00:20:55.814 killing process with pid 831946 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 831946 00:20:55.814 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 831946 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.075 23:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.981 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.981 00:20:57.981 real 3m59.131s 00:20:57.981 user 15m6.461s 00:20:57.981 sys 0m36.979s 00:20:57.981 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.981 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.981 ************************************ 00:20:57.981 END TEST nvmf_connect_disconnect 00:20:57.981 ************************************ 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:58.241 ************************************ 00:20:58.241 START TEST nvmf_multitarget 00:20:58.241 ************************************ 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:20:58.241 * Looking for test storage... 00:20:58.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.241 
23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:20:58.241 23:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.241 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.242 23:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.536 23:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.536 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:01.537 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
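[editor's note] The trace above is gather_supported_nvmf_pci_devs walking the cached PCI IDs for the supported NICs (Intel E810/X722 and several Mellanox parts) and then, for each matching PCI function, resolving the kernel net devices that sit under it in sysfs. A condensed sketch of that lookup follows; the 0000:84:00.0 address and the ice-driven cvl_0_0 name are the ones from this run and are host-specific, and the operstate read is only an approximation of the script's `[[ up == up ]]` check:

  pci=0000:84:00.0                                   # example function from this run
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  net_devs=()
  for dev in "${pci_net_devs[@]}"; do
      [[ $(cat "$dev/operstate" 2>/dev/null) == up ]] || continue
      net_devs+=("${dev##*/}")                       # strip the sysfs path, keep the interface name
  done
  echo "Found net devices under $pci: ${net_devs[*]}"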
00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:01.537 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:01.537 Found net devices under 0000:84:00.0: cvl_0_0 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:01.537 Found net devices under 0000:84:00.1: cvl_0_1 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:21:01.537 00:21:01.537 --- 10.0.0.2 ping statistics --- 00:21:01.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.537 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:01.537 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:21:01.798 00:21:01.798 --- 10.0.0.1 ping statistics --- 00:21:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.798 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=861855 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 861855 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 861855 ']' 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
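[editor's note] The nvmf_tcp_init sequence traced above boils down to a two-endpoint topology on a single host: the target port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened for NVMe/TCP, and a ping in each direction confirms the path before the target application is started inside the namespace. A condensed sketch, using the interface names cvl_0_0/cvl_0_1 from this run's E810 ports (they vary per host):

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"                           # target port lives in the namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1                                          # initiator side
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT                 # let NVMe/TCP traffic in

  ping -c 1 10.0.0.2                                                           # initiator -> target
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                    # target -> initiator

The nvmf_tgt process launched next runs through `ip netns exec cvl_0_0_ns_spdk`, so it listens on the 10.0.0.2 side while the initiator-side tools stay in the default namespace.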
00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.798 23:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:01.798 [2024-07-22 23:01:37.993698] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:21:01.798 [2024-07-22 23:01:37.993857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.798 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.061 [2024-07-22 23:01:38.145874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.061 [2024-07-22 23:01:38.298278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.061 [2024-07-22 23:01:38.298390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.061 [2024-07-22 23:01:38.298426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.061 [2024-07-22 23:01:38.298457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.061 [2024-07-22 23:01:38.298483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.061 [2024-07-22 23:01:38.298602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.061 [2024-07-22 23:01:38.298682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.061 [2024-07-22 23:01:38.298740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.061 [2024-07-22 23:01:38.298744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.999 23:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.999 23:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:21:02.999 23:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.999 23:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.999 23:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:02.999 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.999 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:02.999 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:02.999 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:21:02.999 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:21:02.999 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:21:02.999 "nvmf_tgt_1" 00:21:02.999 23:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:21:03.259 "nvmf_tgt_2" 00:21:03.259 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:03.259 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:21:03.542 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:21:03.542 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:21:03.804 true 00:21:03.805 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:21:04.065 true 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.065 rmmod nvme_tcp 00:21:04.065 rmmod nvme_fabrics 00:21:04.065 rmmod nvme_keyring 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 861855 ']' 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 861855 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 861855 ']' 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 861855 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
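[editor's note] The multitarget checks that just completed reduce to a handful of calls through multitarget_rpc.py, with `jq length` counting the targets reported by nvmf_get_targets. A condensed sketch of the flow; rpc_py is the workspace path used above and -s 32 is passed exactly as in the trace:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists

  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default target plus the two new ones

  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default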
00:21:04.065 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861855 00:21:04.325 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:04.325 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:04.325 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861855' 00:21:04.325 killing process with pid 861855 00:21:04.325 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 861855 00:21:04.325 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 861855 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.584 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.494 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.753 00:21:06.753 real 0m8.460s 00:21:06.753 user 0m12.034s 00:21:06.753 sys 0m3.318s 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:06.753 ************************************ 00:21:06.753 END TEST nvmf_multitarget 00:21:06.753 ************************************ 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:06.753 ************************************ 00:21:06.753 START TEST nvmf_rpc 00:21:06.753 ************************************ 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:21:06.753 * Looking for test storage... 
00:21:06.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.753 23:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.753 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.050 23:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:10.050 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:10.050 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.050 
23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:10.050 Found net devices under 0000:84:00.0: cvl_0_0 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.050 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:10.051 Found net devices under 0000:84:00.1: cvl_0_1 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.051 23:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:21:10.051 00:21:10.051 --- 10.0.0.2 ping statistics --- 00:21:10.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.051 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:21:10.051 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:21:10.312 00:21:10.312 --- 10.0.0.1 ping statistics --- 00:21:10.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.312 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=864235 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.312 23:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 864235 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 864235 ']' 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.312 23:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.312 [2024-07-22 23:01:46.514003] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:21:10.312 [2024-07-22 23:01:46.514182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.312 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.572 [2024-07-22 23:01:46.667578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.572 [2024-07-22 23:01:46.822723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.572 [2024-07-22 23:01:46.822826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.572 [2024-07-22 23:01:46.822861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.572 [2024-07-22 23:01:46.822892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.572 [2024-07-22 23:01:46.822919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.572 [2024-07-22 23:01:46.823041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.572 [2024-07-22 23:01:46.823104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.572 [2024-07-22 23:01:46.823145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.572 [2024-07-22 23:01:46.823150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:21:10.832 "tick_rate": 2700000000, 00:21:10.832 "poll_groups": [ 00:21:10.832 { 00:21:10.832 "name": "nvmf_tgt_poll_group_000", 00:21:10.832 "admin_qpairs": 0, 00:21:10.832 "io_qpairs": 0, 00:21:10.832 "current_admin_qpairs": 0, 00:21:10.832 "current_io_qpairs": 0, 00:21:10.832 "pending_bdev_io": 0, 00:21:10.832 "completed_nvme_io": 0, 00:21:10.832 "transports": [] 00:21:10.832 }, 00:21:10.832 { 00:21:10.832 "name": "nvmf_tgt_poll_group_001", 00:21:10.832 "admin_qpairs": 0, 00:21:10.832 "io_qpairs": 0, 00:21:10.832 "current_admin_qpairs": 0, 00:21:10.832 "current_io_qpairs": 0, 00:21:10.832 "pending_bdev_io": 0, 00:21:10.832 "completed_nvme_io": 0, 00:21:10.832 "transports": [] 00:21:10.832 }, 00:21:10.832 { 00:21:10.832 "name": "nvmf_tgt_poll_group_002", 00:21:10.832 "admin_qpairs": 0, 00:21:10.832 "io_qpairs": 0, 00:21:10.832 "current_admin_qpairs": 0, 00:21:10.832 "current_io_qpairs": 0, 00:21:10.832 "pending_bdev_io": 0, 00:21:10.832 "completed_nvme_io": 0, 00:21:10.832 "transports": [] 00:21:10.832 }, 00:21:10.832 { 00:21:10.832 "name": "nvmf_tgt_poll_group_003", 00:21:10.832 "admin_qpairs": 0, 00:21:10.832 "io_qpairs": 0, 00:21:10.832 "current_admin_qpairs": 0, 00:21:10.832 "current_io_qpairs": 0, 00:21:10.832 "pending_bdev_io": 0, 00:21:10.832 "completed_nvme_io": 0, 00:21:10.832 "transports": [] 00:21:10.832 } 00:21:10.832 ] 00:21:10.832 }' 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
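[editor's note] The rpc.sh checks here lean on two small jq helpers over nvmf_get_stats output: jcount (just above) counts the matches of a filter, one poll group per core of the 0xF mask, and jsum (used just below) sums a numeric field across poll groups with awk. A sketch of the same queries, assuming scripts/rpc.py can reach the target's /var/tmp/spdk.sock; the test itself wraps this in rpc_cmd inside the target namespace:

  stats=$(scripts/rpc.py nvmf_get_stats)

  echo "$stats" | jq '.poll_groups[].name' | wc -l                                  # expect 4 poll groups
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'    # expect 0 before any connects

After `nvmf_create_transport -t tcp -o -u 8192` runs (next in the trace), the same stats report a TCP entry under each poll group's transports array.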
00:21:10.832 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.092 [2024-07-22 23:01:47.190288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:21:11.092 "tick_rate": 2700000000, 00:21:11.092 "poll_groups": [ 00:21:11.092 { 00:21:11.092 "name": "nvmf_tgt_poll_group_000", 00:21:11.092 "admin_qpairs": 0, 00:21:11.092 "io_qpairs": 0, 00:21:11.092 "current_admin_qpairs": 0, 00:21:11.092 "current_io_qpairs": 0, 00:21:11.092 "pending_bdev_io": 0, 00:21:11.092 "completed_nvme_io": 0, 00:21:11.092 "transports": [ 00:21:11.092 { 00:21:11.092 "trtype": "TCP" 00:21:11.092 } 00:21:11.092 ] 00:21:11.092 }, 00:21:11.092 { 00:21:11.092 "name": "nvmf_tgt_poll_group_001", 00:21:11.092 "admin_qpairs": 0, 00:21:11.092 "io_qpairs": 0, 00:21:11.092 "current_admin_qpairs": 0, 00:21:11.092 "current_io_qpairs": 0, 00:21:11.092 "pending_bdev_io": 0, 00:21:11.092 "completed_nvme_io": 0, 00:21:11.092 "transports": [ 00:21:11.092 { 00:21:11.092 "trtype": "TCP" 00:21:11.092 } 00:21:11.092 ] 00:21:11.092 }, 00:21:11.092 { 00:21:11.092 "name": "nvmf_tgt_poll_group_002", 00:21:11.092 "admin_qpairs": 0, 00:21:11.092 "io_qpairs": 0, 00:21:11.092 "current_admin_qpairs": 0, 00:21:11.092 "current_io_qpairs": 0, 00:21:11.092 "pending_bdev_io": 0, 00:21:11.092 "completed_nvme_io": 0, 00:21:11.092 "transports": [ 00:21:11.092 { 00:21:11.092 "trtype": "TCP" 00:21:11.092 } 00:21:11.092 ] 00:21:11.092 }, 00:21:11.092 { 00:21:11.092 "name": "nvmf_tgt_poll_group_003", 00:21:11.092 "admin_qpairs": 0, 00:21:11.092 "io_qpairs": 0, 00:21:11.092 "current_admin_qpairs": 0, 00:21:11.092 "current_io_qpairs": 0, 00:21:11.092 "pending_bdev_io": 0, 00:21:11.092 "completed_nvme_io": 0, 00:21:11.092 "transports": [ 00:21:11.092 { 00:21:11.092 "trtype": "TCP" 00:21:11.092 } 00:21:11.092 ] 00:21:11.092 } 00:21:11.092 ] 00:21:11.092 }' 00:21:11.092 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:21:11.093 23:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.093 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 Malloc1 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.353 [2024-07-22 23:01:47.437669] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:21:11.353 [2024-07-22 23:01:47.470401] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:21:11.353 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:11.353 could not add new controller: failed to write to nvme-fabrics device 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:11.353 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:11.354 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:11.354 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.354 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.354 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.354 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:11.923 23:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:21:11.923 23:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:21:11.923 23:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:11.923 23:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:11.923 23:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:14.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:14.459 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:14.460 [2024-07-22 23:01:50.310981] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:21:14.460 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:14.460 could not add new controller: failed to write to nvme-fabrics device 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.460 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:14.722 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:21:14.722 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:21:14.722 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:14.722 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:14.722 23:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
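(Editor's aside: the connect steps above verify attachment with a waitforserial-style helper that polls lsblk until a block device carrying the subsystem serial appears. A rough sketch assembled from the traced commands; the loop bound, lsblk columns, and grep match come from the log, while the surrounding control flow is an assumption.)

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches the expected subsystem serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME
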
00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:17.259 23:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:17.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.259 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 [2024-07-22 23:01:53.124789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.260 
23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.260 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:17.829 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:17.829 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:21:17.829 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.829 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:17.829 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:19.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
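(Editor's aside: each pass of the seq 1 5 loop above repeats the same target-side setup, host-side connect/disconnect, and teardown. Condensed from the traced calls, one iteration looks roughly like the following; all commands appear verbatim in the log except the $hostnqn/$hostid shorthand, which stands in for the long uuid-based NQN used above, and error handling is omitted.)

    for i in $(seq 1 5); do
        # Target side: create the subsystem, expose it on TCP, attach the malloc namespace
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        # Host side: connect, wait for the namespace to appear, then disconnect
        nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
            -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME

        # Teardown before the next iteration
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
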
00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.739 [2024-07-22 23:01:55.969698] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.739 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:20.307 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:20.307 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:21:20.307 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.307 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:20.307 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:22.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.847 [2024-07-22 23:01:58.794726] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.847 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:23.417 23:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:23.417 23:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:21:23.417 23:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.417 23:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:23.417 23:01:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:25.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:25.328 23:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:25.328 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.329 [2024-07-22 23:02:01.634904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.329 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.589 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.589 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:25.589 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.589 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.589 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.589 23:02:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:26.175 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:26.175 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:21:26.175 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:26.175 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:26.175 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:28.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.138 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.399 [2024-07-22 23:02:04.457837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.399 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:28.968 23:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:28.968 23:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:21:28.968 23:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:28.968 23:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:28.968 23:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:21:30.880 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:30.880 23:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:30.880 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:31.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 [2024-07-22 23:02:07.367428] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 [2024-07-22 23:02:07.415511] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.140 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.141 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 [2024-07-22 23:02:07.463688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 [2024-07-22 23:02:07.511894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.400 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 [2024-07-22 23:02:07.560066] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:21:31.401 "tick_rate": 2700000000, 00:21:31.401 "poll_groups": [ 00:21:31.401 { 00:21:31.401 "name": "nvmf_tgt_poll_group_000", 00:21:31.401 "admin_qpairs": 2, 00:21:31.401 "io_qpairs": 84, 00:21:31.401 "current_admin_qpairs": 0, 00:21:31.401 "current_io_qpairs": 0, 00:21:31.401 "pending_bdev_io": 0, 00:21:31.401 "completed_nvme_io": 184, 00:21:31.401 "transports": [ 00:21:31.401 { 00:21:31.401 "trtype": "TCP" 00:21:31.401 } 00:21:31.401 ] 00:21:31.401 }, 00:21:31.401 { 00:21:31.401 "name": "nvmf_tgt_poll_group_001", 00:21:31.401 "admin_qpairs": 2, 00:21:31.401 "io_qpairs": 84, 00:21:31.401 "current_admin_qpairs": 0, 00:21:31.401 "current_io_qpairs": 0, 00:21:31.401 "pending_bdev_io": 0, 00:21:31.401 "completed_nvme_io": 294, 00:21:31.401 "transports": [ 00:21:31.401 { 00:21:31.401 "trtype": "TCP" 00:21:31.401 } 00:21:31.401 ] 00:21:31.401 }, 00:21:31.401 { 00:21:31.401 "name": "nvmf_tgt_poll_group_002", 00:21:31.401 "admin_qpairs": 1, 00:21:31.401 "io_qpairs": 84, 00:21:31.401 "current_admin_qpairs": 0, 00:21:31.401 "current_io_qpairs": 0, 00:21:31.401 "pending_bdev_io": 0, 00:21:31.401 "completed_nvme_io": 122, 00:21:31.401 "transports": [ 00:21:31.401 { 00:21:31.401 "trtype": "TCP" 00:21:31.401 } 00:21:31.401 ] 00:21:31.401 }, 00:21:31.401 { 00:21:31.401 "name": "nvmf_tgt_poll_group_003", 00:21:31.401 "admin_qpairs": 2, 00:21:31.401 "io_qpairs": 84, 00:21:31.401 "current_admin_qpairs": 0, 00:21:31.401 "current_io_qpairs": 0, 00:21:31.401 "pending_bdev_io": 0, 00:21:31.401 "completed_nvme_io": 86, 00:21:31.401 "transports": [ 00:21:31.401 { 00:21:31.401 "trtype": "TCP" 00:21:31.401 } 00:21:31.401 ] 00:21:31.401 } 00:21:31.401 ] 00:21:31.401 }' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:21:31.401 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.661 rmmod nvme_tcp 00:21:31.661 rmmod nvme_fabrics 00:21:31.661 rmmod nvme_keyring 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 864235 ']' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 864235 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 864235 ']' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 864235 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864235 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864235' 00:21:31.661 killing process with pid 864235 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 864235 00:21:31.661 23:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 864235 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.921 23:02:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.461 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:34.461 00:21:34.461 real 0m27.401s 00:21:34.461 user 1m25.429s 00:21:34.461 sys 0m5.379s 00:21:34.461 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.461 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:34.461 ************************************ 00:21:34.461 END TEST nvmf_rpc 00:21:34.461 ************************************ 00:21:34.461 23:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.462 ************************************ 00:21:34.462 START TEST nvmf_invalid 00:21:34.462 ************************************ 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:34.462 * Looking for test storage... 
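The nvmf_rpc trace that ends above repeatedly cycles a single subsystem through its full lifecycle and then checks aggregate queue-pair counters from nvmf_get_stats. A condensed, standalone sketch of the same sequence (a sketch only: it assumes a running nvmf_tgt with a TCP transport listening on 10.0.0.2, $rpc pointing at SPDK's scripts/rpc.py, a Malloc1 bdev already created, and an arbitrary loop count of 5):

  for i in $(seq 1 5); do
      "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME   # new subsystem with a serial number
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                   # attach namespace 1
      "$rpc" nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                      # detach namespace 1 again
      "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
  # jsum-style aggregation used at rpc.sh@112/113: sum one stats field across all poll groups
  "$rpc" nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'

The (( 336 > 0 )) assertion in the trace is exactly this sum: four poll groups reporting 84 io_qpairs each.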
00:21:34.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
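Before any invalid-input cases run, nvmf/common.sh pins the test identities: the listener ports 4420/4421/4422, a per-run host NQN generated with nvme gen-hostnqn, and a host ID taken from the UUID portion of that NQN, while invalid.sh fixes the base subsystem NQN nqn.2016-06.io.spdk:cnode. A hedged sketch of deriving the same host pair outside the harness (the exact parameter expansion common.sh uses may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID
  echo "$NVME_HOSTNQN" "$NVME_HOSTID"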
00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.462 23:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.758 23:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:37.758 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.758 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:37.758 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
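In the discovery pass above, gather_supported_nvmf_pci_devs whitelists known Intel (e810/x722) and Mellanox device IDs, matches the two 0x8086:0x159b (ice) functions at 0000:84:00.0 and 0000:84:00.1, and then resolves each PCI function to its kernel interface through sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. A minimal sketch of that PCI-to-netdev step, assuming the same two bus addresses:

  for pci in 0000:84:00.0 0000:84:00.1; do
      # every entry under .../net/ is a netdev bound to this PCI function
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: ${dev##*/}"
      done
  done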
00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:37.759 Found net devices under 0000:84:00.0: cvl_0_0 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:37.759 Found net devices under 0000:84:00.1: cvl_0_1 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:21:37.759 00:21:37.759 --- 10.0.0.2 ping statistics --- 00:21:37.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.759 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:21:37.759 00:21:37.759 --- 10.0.0.1 ping statistics --- 00:21:37.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.759 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=868844 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 868844 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 868844 ']' 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.759 23:02:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:37.759 [2024-07-22 23:02:13.963413] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
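nvmf_tcp_init above splits the two ice ports into a point-to-point topology: the target-side interface cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator keeps cvl_0_1 as 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-checked, and nvmf_tgt (pid 868844 in this run) is then launched inside the namespace. A condensed sketch of the same wiring, assuming this run's interface names and a relative path to the nvmf_tgt binary for brevity:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The harness's waitforlisten then blocks until the target answers on /var/tmp/spdk.sock before the first RPC is issued.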
00:21:37.759 [2024-07-22 23:02:13.963578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.759 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.019 [2024-07-22 23:02:14.114636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.019 [2024-07-22 23:02:14.275147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.019 [2024-07-22 23:02:14.275211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.019 [2024-07-22 23:02:14.275231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.019 [2024-07-22 23:02:14.275247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.019 [2024-07-22 23:02:14.275261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.019 [2024-07-22 23:02:14.275343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.019 [2024-07-22 23:02:14.275437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.019 [2024-07-22 23:02:14.275441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.019 [2024-07-22 23:02:14.275380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:38.957 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20690 00:21:39.215 [2024-07-22 23:02:15.425888] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:21:39.215 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:21:39.215 { 00:21:39.215 "nqn": "nqn.2016-06.io.spdk:cnode20690", 00:21:39.215 "tgt_name": "foobar", 00:21:39.215 "method": "nvmf_create_subsystem", 00:21:39.215 "req_id": 1 00:21:39.215 } 00:21:39.215 Got JSON-RPC error response 00:21:39.215 response: 00:21:39.215 { 00:21:39.215 "code": -32603, 00:21:39.215 "message": "Unable to find target foobar" 00:21:39.215 }' 00:21:39.215 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:21:39.215 { 00:21:39.215 "nqn": "nqn.2016-06.io.spdk:cnode20690", 00:21:39.215 "tgt_name": "foobar", 00:21:39.215 "method": "nvmf_create_subsystem", 00:21:39.215 "req_id": 1 
00:21:39.215 } 00:21:39.215 Got JSON-RPC error response 00:21:39.215 response: 00:21:39.215 { 00:21:39.215 "code": -32603, 00:21:39.215 "message": "Unable to find target foobar" 00:21:39.215 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:21:39.216 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:21:39.216 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12876 00:21:39.476 [2024-07-22 23:02:15.731000] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12876: invalid serial number 'SPDKISFASTANDAWESOME' 00:21:39.476 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:21:39.476 { 00:21:39.476 "nqn": "nqn.2016-06.io.spdk:cnode12876", 00:21:39.476 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:21:39.476 "method": "nvmf_create_subsystem", 00:21:39.476 "req_id": 1 00:21:39.476 } 00:21:39.476 Got JSON-RPC error response 00:21:39.476 response: 00:21:39.476 { 00:21:39.476 "code": -32602, 00:21:39.476 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:21:39.476 }' 00:21:39.476 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:21:39.476 { 00:21:39.476 "nqn": "nqn.2016-06.io.spdk:cnode12876", 00:21:39.476 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:21:39.476 "method": "nvmf_create_subsystem", 00:21:39.476 "req_id": 1 00:21:39.476 } 00:21:39.476 Got JSON-RPC error response 00:21:39.476 response: 00:21:39.476 { 00:21:39.476 "code": -32602, 00:21:39.476 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:21:39.476 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:39.476 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:21:39.476 23:02:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25214 00:21:40.047 [2024-07-22 23:02:16.296989] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25214: invalid model number 'SPDK_Controller' 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:21:40.047 { 00:21:40.047 "nqn": "nqn.2016-06.io.spdk:cnode25214", 00:21:40.047 "model_number": "SPDK_Controller\u001f", 00:21:40.047 "method": "nvmf_create_subsystem", 00:21:40.047 "req_id": 1 00:21:40.047 } 00:21:40.047 Got JSON-RPC error response 00:21:40.047 response: 00:21:40.047 { 00:21:40.047 "code": -32602, 00:21:40.047 "message": "Invalid MN SPDK_Controller\u001f" 00:21:40.047 }' 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:21:40.047 { 00:21:40.047 "nqn": "nqn.2016-06.io.spdk:cnode25214", 00:21:40.047 "model_number": "SPDK_Controller\u001f", 00:21:40.047 "method": "nvmf_create_subsystem", 00:21:40.047 "req_id": 1 00:21:40.047 } 00:21:40.047 Got JSON-RPC error response 00:21:40.047 response: 00:21:40.047 { 00:21:40.047 "code": -32602, 00:21:40.047 "message": "Invalid MN SPDK_Controller\u001f" 00:21:40.047 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # 
local length=21 ll 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.047 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.048 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=k 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x69' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ELHYIRl}#'\'';km{RB&ii+{' 00:21:40.309 23:02:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ELHYIRl}#'\'';km{RB&ii+{' nqn.2016-06.io.spdk:cnode14745 00:21:40.880 [2024-07-22 23:02:17.023565] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14745: invalid serial number 'ELHYIRl}#';km{RB&ii+{' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:21:40.880 { 00:21:40.880 "nqn": "nqn.2016-06.io.spdk:cnode14745", 00:21:40.880 "serial_number": "ELHYIRl}#'\'';km{RB&ii+{", 00:21:40.880 "method": "nvmf_create_subsystem", 00:21:40.880 "req_id": 1 00:21:40.880 } 00:21:40.880 Got JSON-RPC error response 00:21:40.880 response: 00:21:40.880 { 00:21:40.880 "code": -32602, 00:21:40.880 "message": "Invalid SN ELHYIRl}#'\'';km{RB&ii+{" 00:21:40.880 }' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:21:40.880 { 00:21:40.880 "nqn": "nqn.2016-06.io.spdk:cnode14745", 00:21:40.880 "serial_number": "ELHYIRl}#';km{RB&ii+{", 00:21:40.880 "method": "nvmf_create_subsystem", 00:21:40.880 "req_id": 1 00:21:40.880 } 00:21:40.880 Got JSON-RPC error response 00:21:40.880 response: 00:21:40.880 { 00:21:40.880 "code": -32602, 00:21:40.880 "message": "Invalid SN ELHYIRl}#';km{RB&ii+{" 00:21:40.880 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' 
'68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x4d' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 63 00:21:40.880 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:40.881 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:21:41.140 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='(' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x7e' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1[Z4yMFwbrC3?Rjw7j D-O~=C9A-"NQO('\''AlK~H~)' 00:21:41.141 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1[Z4yMFwbrC3?Rjw7j D-O~=C9A-"NQO('\''AlK~H~)' nqn.2016-06.io.spdk:cnode2126 00:21:41.399 [2024-07-22 23:02:17.537472] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2126: invalid model number '1[Z4yMFwbrC3?Rjw7j D-O~=C9A-"NQO('AlK~H~)' 00:21:41.399 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:21:41.399 { 00:21:41.399 "nqn": "nqn.2016-06.io.spdk:cnode2126", 00:21:41.399 "model_number": "1[Z4yMFwbrC3?Rjw7j D-O~=C9A-\"NQO('\''AlK~H~)", 00:21:41.399 "method": "nvmf_create_subsystem", 00:21:41.399 "req_id": 1 00:21:41.399 } 00:21:41.399 Got JSON-RPC error response 00:21:41.399 response: 00:21:41.399 { 00:21:41.399 "code": -32602, 00:21:41.399 "message": "Invalid MN 1[Z4yMFwbrC3?Rjw7j D-O~=C9A-\"NQO('\''AlK~H~)" 00:21:41.399 }' 00:21:41.399 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:21:41.399 { 00:21:41.399 "nqn": "nqn.2016-06.io.spdk:cnode2126", 00:21:41.399 "model_number": "1[Z4yMFwbrC3?Rjw7j D-O~=C9A-\"NQO('AlK~H~)", 00:21:41.399 "method": "nvmf_create_subsystem", 00:21:41.399 "req_id": 1 00:21:41.399 } 00:21:41.399 Got JSON-RPC error response 00:21:41.399 response: 00:21:41.399 { 00:21:41.399 "code": -32602, 00:21:41.399 "message": "Invalid MN 1[Z4yMFwbrC3?Rjw7j D-O~=C9A-\"NQO('AlK~H~)" 00:21:41.399 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:21:41.399 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:21:41.659 [2024-07-22 23:02:17.838585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.659 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:21:42.230 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:21:42.230 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:21:42.230 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:21:42.230 23:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:21:42.230 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:21:42.801 [2024-07-22 23:02:19.068244] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:21:42.801 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:21:42.801 { 00:21:42.801 "nqn": "nqn.2016-06.io.spdk:cnode", 00:21:42.801 "listen_address": { 00:21:42.801 "trtype": "tcp", 00:21:42.801 "traddr": "", 00:21:42.801 "trsvcid": "4421" 00:21:42.801 }, 00:21:42.801 "method": "nvmf_subsystem_remove_listener", 00:21:42.801 "req_id": 1 00:21:42.801 } 00:21:42.801 Got JSON-RPC error response 00:21:42.801 response: 00:21:42.801 { 00:21:42.801 "code": -32602, 00:21:42.801 "message": "Invalid parameters" 00:21:42.801 }' 00:21:42.801 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:21:42.801 { 00:21:42.801 "nqn": "nqn.2016-06.io.spdk:cnode", 00:21:42.801 "listen_address": { 00:21:42.801 "trtype": "tcp", 00:21:42.801 "traddr": "", 00:21:42.801 "trsvcid": "4421" 00:21:42.801 }, 00:21:42.801 "method": "nvmf_subsystem_remove_listener", 00:21:42.801 "req_id": 1 00:21:42.801 } 00:21:42.801 Got JSON-RPC error response 00:21:42.801 response: 00:21:42.801 { 00:21:42.801 "code": -32602, 00:21:42.801 "message": "Invalid parameters" 00:21:42.801 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:21:42.801 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17007 -i 0 00:21:43.742 [2024-07-22 23:02:19.702436] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17007: invalid cntlid range [0-65519] 00:21:43.742 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:21:43.742 { 00:21:43.742 "nqn": "nqn.2016-06.io.spdk:cnode17007", 00:21:43.742 "min_cntlid": 0, 00:21:43.742 "method": "nvmf_create_subsystem", 00:21:43.742 "req_id": 1 00:21:43.742 } 00:21:43.742 Got JSON-RPC error response 00:21:43.742 response: 00:21:43.742 { 00:21:43.742 "code": -32602, 00:21:43.742 "message": "Invalid cntlid range [0-65519]" 00:21:43.742 }' 00:21:43.742 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:21:43.742 { 00:21:43.742 "nqn": "nqn.2016-06.io.spdk:cnode17007", 00:21:43.742 "min_cntlid": 0, 00:21:43.742 "method": "nvmf_create_subsystem", 00:21:43.742 "req_id": 1 00:21:43.742 } 00:21:43.742 Got JSON-RPC error response 00:21:43.742 response: 00:21:43.742 { 00:21:43.742 "code": -32602, 00:21:43.742 "message": "Invalid cntlid range [0-65519]" 00:21:43.742 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:43.742 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26521 -i 65520 00:21:44.001 [2024-07-22 23:02:20.272474] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26521: invalid cntlid range [65520-65519] 00:21:44.001 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:21:44.001 { 00:21:44.001 "nqn": 
"nqn.2016-06.io.spdk:cnode26521", 00:21:44.001 "min_cntlid": 65520, 00:21:44.001 "method": "nvmf_create_subsystem", 00:21:44.001 "req_id": 1 00:21:44.001 } 00:21:44.001 Got JSON-RPC error response 00:21:44.001 response: 00:21:44.001 { 00:21:44.001 "code": -32602, 00:21:44.001 "message": "Invalid cntlid range [65520-65519]" 00:21:44.001 }' 00:21:44.001 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:21:44.001 { 00:21:44.001 "nqn": "nqn.2016-06.io.spdk:cnode26521", 00:21:44.001 "min_cntlid": 65520, 00:21:44.001 "method": "nvmf_create_subsystem", 00:21:44.001 "req_id": 1 00:21:44.001 } 00:21:44.001 Got JSON-RPC error response 00:21:44.001 response: 00:21:44.001 { 00:21:44.001 "code": -32602, 00:21:44.001 "message": "Invalid cntlid range [65520-65519]" 00:21:44.001 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:44.002 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7095 -I 0 00:21:44.569 [2024-07-22 23:02:20.581578] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7095: invalid cntlid range [1-0] 00:21:44.569 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:21:44.569 { 00:21:44.569 "nqn": "nqn.2016-06.io.spdk:cnode7095", 00:21:44.569 "max_cntlid": 0, 00:21:44.569 "method": "nvmf_create_subsystem", 00:21:44.569 "req_id": 1 00:21:44.569 } 00:21:44.569 Got JSON-RPC error response 00:21:44.569 response: 00:21:44.569 { 00:21:44.569 "code": -32602, 00:21:44.569 "message": "Invalid cntlid range [1-0]" 00:21:44.569 }' 00:21:44.569 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:21:44.569 { 00:21:44.569 "nqn": "nqn.2016-06.io.spdk:cnode7095", 00:21:44.569 "max_cntlid": 0, 00:21:44.569 "method": "nvmf_create_subsystem", 00:21:44.569 "req_id": 1 00:21:44.569 } 00:21:44.569 Got JSON-RPC error response 00:21:44.569 response: 00:21:44.569 { 00:21:44.569 "code": -32602, 00:21:44.569 "message": "Invalid cntlid range [1-0]" 00:21:44.569 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:44.569 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4982 -I 65520 00:21:44.829 [2024-07-22 23:02:21.131637] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4982: invalid cntlid range [1-65520] 00:21:45.088 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:21:45.088 { 00:21:45.088 "nqn": "nqn.2016-06.io.spdk:cnode4982", 00:21:45.088 "max_cntlid": 65520, 00:21:45.088 "method": "nvmf_create_subsystem", 00:21:45.088 "req_id": 1 00:21:45.088 } 00:21:45.088 Got JSON-RPC error response 00:21:45.088 response: 00:21:45.088 { 00:21:45.088 "code": -32602, 00:21:45.088 "message": "Invalid cntlid range [1-65520]" 00:21:45.088 }' 00:21:45.088 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:21:45.088 { 00:21:45.088 "nqn": "nqn.2016-06.io.spdk:cnode4982", 00:21:45.088 "max_cntlid": 65520, 00:21:45.088 "method": "nvmf_create_subsystem", 00:21:45.088 "req_id": 1 00:21:45.088 } 00:21:45.088 Got JSON-RPC error response 00:21:45.088 response: 00:21:45.088 { 00:21:45.088 "code": -32602, 00:21:45.088 "message": "Invalid cntlid range [1-65520]" 00:21:45.088 
} == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:45.088 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11071 -i 6 -I 5 00:21:45.656 [2024-07-22 23:02:21.773892] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11071: invalid cntlid range [6-5] 00:21:45.656 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:21:45.656 { 00:21:45.656 "nqn": "nqn.2016-06.io.spdk:cnode11071", 00:21:45.656 "min_cntlid": 6, 00:21:45.656 "max_cntlid": 5, 00:21:45.656 "method": "nvmf_create_subsystem", 00:21:45.656 "req_id": 1 00:21:45.656 } 00:21:45.656 Got JSON-RPC error response 00:21:45.656 response: 00:21:45.656 { 00:21:45.656 "code": -32602, 00:21:45.656 "message": "Invalid cntlid range [6-5]" 00:21:45.656 }' 00:21:45.656 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:21:45.656 { 00:21:45.656 "nqn": "nqn.2016-06.io.spdk:cnode11071", 00:21:45.656 "min_cntlid": 6, 00:21:45.656 "max_cntlid": 5, 00:21:45.656 "method": "nvmf_create_subsystem", 00:21:45.656 "req_id": 1 00:21:45.656 } 00:21:45.656 Got JSON-RPC error response 00:21:45.656 response: 00:21:45.656 { 00:21:45.656 "code": -32602, 00:21:45.656 "message": "Invalid cntlid range [6-5]" 00:21:45.656 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:45.656 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:21:45.916 { 00:21:45.916 "name": "foobar", 00:21:45.916 "method": "nvmf_delete_target", 00:21:45.916 "req_id": 1 00:21:45.916 } 00:21:45.916 Got JSON-RPC error response 00:21:45.916 response: 00:21:45.916 { 00:21:45.916 "code": -32602, 00:21:45.916 "message": "The specified target doesn'\''t exist, cannot delete it." 00:21:45.916 }' 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:21:45.916 { 00:21:45.916 "name": "foobar", 00:21:45.916 "method": "nvmf_delete_target", 00:21:45.916 "req_id": 1 00:21:45.916 } 00:21:45.916 Got JSON-RPC error response 00:21:45.916 response: 00:21:45.916 { 00:21:45.916 "code": -32602, 00:21:45.916 "message": "The specified target doesn't exist, cannot delete it." 
00:21:45.916 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.916 rmmod nvme_tcp 00:21:45.916 rmmod nvme_fabrics 00:21:45.916 rmmod nvme_keyring 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 868844 ']' 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 868844 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 868844 ']' 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 868844 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 868844 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 868844' 00:21:45.916 killing process with pid 868844 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 868844 00:21:45.916 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 868844 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.178 23:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.178 23:02:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.766 00:21:48.766 real 0m14.178s 00:21:48.766 user 0m39.641s 00:21:48.766 sys 0m4.094s 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:48.766 ************************************ 00:21:48.766 END TEST nvmf_invalid 00:21:48.766 ************************************ 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.766 23:02:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.766 ************************************ 00:21:48.766 START TEST nvmf_connect_stress 00:21:48.766 ************************************ 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:21:48.767 * Looking for test storage... 
00:21:48.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.767 23:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.062 23:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.062 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:52.063 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:52.063 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
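Note: the common.sh trace here is NIC discovery. Lists of supported Intel (E810 0x1592/0x159b, X722 0x37d2) and Mellanox device IDs are assembled, the two E810 functions at 0000:84:00.0 and 0000:84:00.1 match, and the trace that follows resolves each PCI address to its kernel net device (cvl_0_0 and cvl_0_1) through sysfs. A simplified sketch of that PCI-to-netdev resolution; the operstate test below stands in for the script's own link check, and the device list is the one visible in this log:

  pci_devs=(0000:84:00.0 0000:84:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # each PCI function exposes its netdev(s) under /sys/bus/pci/devices/<addr>/net/
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $path ]] || continue
          name=${path##*/}
          state=$(cat /sys/class/net/"$name"/operstate 2>/dev/null)
          [[ $state == up ]] && { echo "Found net devices under $pci: $name"; net_devs+=("$name"); }
      done
  done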
00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:52.063 Found net devices under 0000:84:00.0: cvl_0_0 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:52.063 Found net devices under 0000:84:00.1: cvl_0_1 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.063 23:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:21:52.063 00:21:52.063 --- 10.0.0.2 ping statistics --- 00:21:52.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.063 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:52.063 00:21:52.063 --- 10.0.0.1 ping statistics --- 00:21:52.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.063 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=871935 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 871935 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 871935 ']' 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.063 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.064 [2024-07-22 23:02:28.183063] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
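Note: nvmf_tcp_init above splits the two E810 ports between network namespaces so one host can act as both target and initiator: cvl_0_0 is moved into the new cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits TCP/4420, and both directions are ping-verified before nvmf_tgt is started inside the namespace with -i 0 -e 0xFFFF -m 0xE. A condensed sketch of the same plumbing, using the interface names and addresses from this log (run as root, from the SPDK build tree):

  TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                     # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INIT_IF"                # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The namespace split is what forces traffic between 10.0.0.1 and 10.0.0.2 onto the two physical ports instead of the kernel loopback path.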
00:21:52.064 [2024-07-22 23:02:28.183231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.064 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.064 [2024-07-22 23:02:28.318530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.325 [2024-07-22 23:02:28.429657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.325 [2024-07-22 23:02:28.429730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.325 [2024-07-22 23:02:28.429750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.325 [2024-07-22 23:02:28.429767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.325 [2024-07-22 23:02:28.429780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.325 [2024-07-22 23:02:28.429884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.325 [2024-07-22 23:02:28.429982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.325 [2024-07-22 23:02:28.429989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.325 [2024-07-22 23:02:28.610779] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.325 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.586 [2024-07-22 23:02:28.638250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.586 NULL1 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=872037 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:52.586 23:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.586 23:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:52.846 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.846 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:52.846 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:52.846 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.846 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:53.106 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.106 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:53.106 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:53.106 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.106 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:53.676 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.676 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:53.676 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:53.677 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.677 23:02:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:53.936 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.936 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:53.936 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:53.936 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.936 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:54.196 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.196 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:54.196 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:54.196 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.196 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:54.456 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.456 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:54.456 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:54.456 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.456 23:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:54.715 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.715 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:54.715 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:54.715 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.715 23:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:55.285 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.285 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:55.286 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:55.286 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.286 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:55.546 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.546 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:55.546 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:55.546 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.546 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:55.806 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.806 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:55.806 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:55.806 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.806 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:56.065 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.065 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:56.065 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:56.065 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.065 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:56.325 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.325 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:56.325 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:56.325 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.326 23:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:56.895 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.895 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:56.895 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:56.895 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.895 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:57.155 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.155 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:57.155 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:57.155 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.155 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:57.416 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.416 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:57.416 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:57.416 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.416 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:57.676 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.676 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:57.676 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:57.676 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.676 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:57.936 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.936 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:57.936 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:57.936 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.936 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:58.507 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.507 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:58.507 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:58.507 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.507 23:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:58.766 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.766 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:58.766 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:58.766 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.766 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:59.025 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.025 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:59.025 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:59.025 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.025 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:59.285 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.285 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:59.285 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:59.285 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.286 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:59.546 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.546 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:21:59.546 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:59.546 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.546 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:00.115 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.115 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:00.115 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:00.115 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.115 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:00.375 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.375 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:00.375 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:00.375 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.375 23:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:00.635 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.635 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:00.635 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:00.635 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.635 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:00.895 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.895 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:00.895 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:00.895 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.895 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:01.155 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.155 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:01.155 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:01.155 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.155 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:01.725 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.725 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:01.725 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:01.725 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.725 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:01.985 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.985 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:01.985 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:01.985 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.985 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:02.246 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.246 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:02.246 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:02.246 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.246 23:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:02.505 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.505 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:02.505 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:02.505 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.505 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:02.505 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 872037 00:22:02.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (872037) - No such process 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 872037 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.765 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.765 rmmod nvme_tcp 00:22:02.765 rmmod nvme_fabrics 00:22:02.765 rmmod nvme_keyring 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 871935 ']' 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 871935 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 871935 ']' 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 871935 00:22:03.025 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:22:03.026 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.026 23:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871935 00:22:03.026 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.026 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.026 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871935' 00:22:03.026 killing process with pid 871935 00:22:03.026 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 871935 00:22:03.026 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 871935 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.286 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.196 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.196 00:22:05.196 real 0m16.881s 00:22:05.196 user 0m38.984s 00:22:05.196 sys 0m7.327s 00:22:05.196 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.196 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:05.196 ************************************ 00:22:05.196 END TEST nvmf_connect_stress 00:22:05.196 ************************************ 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.457 ************************************ 00:22:05.457 START TEST nvmf_fused_ordering 00:22:05.457 ************************************ 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:22:05.457 * Looking for test storage... 
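The nvmf_connect_stress stage that just ended reduces to the pattern below: run the connect_stress tool against the subsystem for 10 seconds while issuing RPCs at the target, and treat the tool staying alive until its timer expires as success. Tool arguments are copied from the trace; the RPC batch is simplified here (the real script replays randomized snippets collected in rpc.txt), so read this as a sketch rather than the suite's exact loop:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Stress the subsystem with rapid connect/disconnect cycles for 10 seconds.
"$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!

# Keep the target busy with RPC traffic while the stress tool is still running.
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$SPDK/scripts/rpc.py" bdev_get_bdevs > /dev/null   # stand-in for the rpc.txt batch
    sleep 1
done
wait "$PERF_PID"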
00:22:05.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.457 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.458 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.787 23:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:08.787 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:08.787 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
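The gather_supported_nvmf_pci_devs walk in the trace (it continues with the second port just below) boils down to scanning the PCI bus for supported NICs, here the two Intel E810 functions reporting 0x8086:0x159b, and remembering the kernel net device exposed under each one. A condensed, illustrative version, not the helper from nvmf/common.sh (the "up == up" test in the trace is approximated here by reading operstate):

net_devs=()
for pci in /sys/bus/pci/devices/*; do
    # Keep only Intel E810 ports (vendor 0x8086, device 0x159b, as in the log).
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [ -e "$net" ] || continue                 # no net device bound to this function
        dev=${net##*/}
        [[ $(cat "$net/operstate" 2>/dev/null) == up ]] || continue
        echo "Found net devices under ${pci##*/}: $dev"
        net_devs+=("$dev")
    done
done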
00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:08.787 Found net devices under 0000:84:00.0: cvl_0_0 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:08.787 Found net devices under 0000:84:00.1: cvl_0_1 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:22:08.787 00:22:08.787 --- 10.0.0.2 ping statistics --- 00:22:08.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.787 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:22:08.787 00:22:08.787 --- 10.0.0.1 ping statistics --- 00:22:08.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.787 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.787 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=875221 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 875221 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 875221 ']' 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.788 23:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:08.788 [2024-07-22 23:02:45.000344] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
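For readability, the nvmf_tcp_init steps from the trace above are collected in one place: the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the host namespace, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 actually crosses the link between the two E810 ports. Commands are copied from the log and assume root on the test node:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                    # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> host sanity check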
00:22:08.788 [2024-07-22 23:02:45.000448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.788 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.788 [2024-07-22 23:02:45.084346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.046 [2024-07-22 23:02:45.190280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.046 [2024-07-22 23:02:45.190354] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.046 [2024-07-22 23:02:45.190375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.046 [2024-07-22 23:02:45.190390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.046 [2024-07-22 23:02:45.190405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.046 [2024-07-22 23:02:45.190450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.046 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.046 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:22:09.046 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.046 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.046 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:09.307 [2024-07-22 23:02:45.368996] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:22:09.307 [2024-07-22 23:02:45.385177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:09.307 NULL1 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:22:09.307 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.308 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:09.308 [2024-07-22 23:02:45.433963] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:22:09.308 [2024-07-22 23:02:45.434024] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875334 ] 00:22:09.308 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.878 Attached to nqn.2016-06.io.spdk:cnode1 00:22:09.878 Namespace ID: 1 size: 1GB 00:22:09.878 fused_ordering(0) 00:22:09.878 fused_ordering(1) 00:22:09.878 fused_ordering(2) 00:22:09.878 fused_ordering(3) 00:22:09.878 fused_ordering(4) 00:22:09.878 fused_ordering(5) 00:22:09.878 fused_ordering(6) 00:22:09.878 fused_ordering(7) 00:22:09.878 fused_ordering(8) 00:22:09.878 fused_ordering(9) 00:22:09.878 fused_ordering(10) 00:22:09.878 fused_ordering(11) 00:22:09.878 fused_ordering(12) 00:22:09.878 fused_ordering(13) 00:22:09.878 fused_ordering(14) 00:22:09.878 fused_ordering(15) 00:22:09.878 fused_ordering(16) 00:22:09.878 fused_ordering(17) 00:22:09.878 fused_ordering(18) 00:22:09.878 fused_ordering(19) 00:22:09.878 fused_ordering(20) 00:22:09.878 fused_ordering(21) 00:22:09.878 fused_ordering(22) 00:22:09.878 fused_ordering(23) 00:22:09.878 fused_ordering(24) 00:22:09.878 fused_ordering(25) 00:22:09.878 fused_ordering(26) 00:22:09.878 fused_ordering(27) 00:22:09.878 fused_ordering(28) 00:22:09.878 fused_ordering(29) 00:22:09.878 fused_ordering(30) 00:22:09.878 fused_ordering(31) 00:22:09.878 fused_ordering(32) 00:22:09.878 fused_ordering(33) 00:22:09.878 fused_ordering(34) 00:22:09.878 fused_ordering(35) 00:22:09.878 fused_ordering(36) 00:22:09.878 fused_ordering(37) 00:22:09.878 fused_ordering(38) 00:22:09.878 fused_ordering(39) 00:22:09.878 fused_ordering(40) 00:22:09.878 fused_ordering(41) 00:22:09.878 fused_ordering(42) 00:22:09.878 fused_ordering(43) 00:22:09.878 fused_ordering(44) 00:22:09.878 fused_ordering(45) 00:22:09.878 fused_ordering(46) 00:22:09.878 fused_ordering(47) 00:22:09.878 fused_ordering(48) 00:22:09.878 fused_ordering(49) 00:22:09.878 fused_ordering(50) 00:22:09.878 fused_ordering(51) 00:22:09.878 fused_ordering(52) 00:22:09.878 fused_ordering(53) 00:22:09.878 fused_ordering(54) 00:22:09.878 fused_ordering(55) 00:22:09.878 fused_ordering(56) 00:22:09.878 fused_ordering(57) 00:22:09.878 fused_ordering(58) 00:22:09.878 fused_ordering(59) 00:22:09.878 fused_ordering(60) 00:22:09.878 fused_ordering(61) 00:22:09.878 fused_ordering(62) 00:22:09.878 fused_ordering(63) 00:22:09.878 fused_ordering(64) 00:22:09.878 fused_ordering(65) 00:22:09.878 fused_ordering(66) 00:22:09.878 fused_ordering(67) 00:22:09.878 fused_ordering(68) 00:22:09.878 fused_ordering(69) 00:22:09.879 fused_ordering(70) 00:22:09.879 fused_ordering(71) 00:22:09.879 fused_ordering(72) 00:22:09.879 fused_ordering(73) 00:22:09.879 fused_ordering(74) 00:22:09.879 fused_ordering(75) 00:22:09.879 fused_ordering(76) 00:22:09.879 fused_ordering(77) 00:22:09.879 fused_ordering(78) 00:22:09.879 fused_ordering(79) 00:22:09.879 fused_ordering(80) 00:22:09.879 fused_ordering(81) 00:22:09.879 fused_ordering(82) 00:22:09.879 fused_ordering(83) 00:22:09.879 fused_ordering(84) 00:22:09.879 fused_ordering(85) 00:22:09.879 fused_ordering(86) 00:22:09.879 fused_ordering(87) 00:22:09.879 fused_ordering(88) 00:22:09.879 fused_ordering(89) 00:22:09.879 fused_ordering(90) 00:22:09.879 fused_ordering(91) 00:22:09.879 fused_ordering(92) 00:22:09.879 fused_ordering(93) 00:22:09.879 fused_ordering(94) 00:22:09.879 fused_ordering(95) 00:22:09.879 fused_ordering(96) 
00:22:09.879 fused_ordering(97) 00:22:09.879 fused_ordering(98) 00:22:09.879 fused_ordering(99) 00:22:09.879 fused_ordering(100) 00:22:09.879 fused_ordering(101) 00:22:09.879 fused_ordering(102) 00:22:09.879 fused_ordering(103) 00:22:09.879 fused_ordering(104) 00:22:09.879 fused_ordering(105) 00:22:09.879 fused_ordering(106) 00:22:09.879 fused_ordering(107) 00:22:09.879 fused_ordering(108) 00:22:09.879 fused_ordering(109) 00:22:09.879 fused_ordering(110) 00:22:09.879 fused_ordering(111) 00:22:09.879 fused_ordering(112) 00:22:09.879 fused_ordering(113) 00:22:09.879 fused_ordering(114) 00:22:09.879 fused_ordering(115) 00:22:09.879 fused_ordering(116) 00:22:09.879 fused_ordering(117) 00:22:09.879 fused_ordering(118) 00:22:09.879 fused_ordering(119) 00:22:09.879 fused_ordering(120) 00:22:09.879 fused_ordering(121) 00:22:09.879 fused_ordering(122) 00:22:09.879 fused_ordering(123) 00:22:09.879 fused_ordering(124) 00:22:09.879 fused_ordering(125) 00:22:09.879 fused_ordering(126) 00:22:09.879 fused_ordering(127) 00:22:09.879 fused_ordering(128) 00:22:09.879 fused_ordering(129) 00:22:09.879 fused_ordering(130) 00:22:09.879 fused_ordering(131) 00:22:09.879 fused_ordering(132) 00:22:09.879 fused_ordering(133) 00:22:09.879 fused_ordering(134) 00:22:09.879 fused_ordering(135) 00:22:09.879 fused_ordering(136) 00:22:09.879 fused_ordering(137) 00:22:09.879 fused_ordering(138) 00:22:09.879 fused_ordering(139) 00:22:09.879 fused_ordering(140) 00:22:09.879 fused_ordering(141) 00:22:09.879 fused_ordering(142) 00:22:09.879 fused_ordering(143) 00:22:09.879 fused_ordering(144) 00:22:09.879 fused_ordering(145) 00:22:09.879 fused_ordering(146) 00:22:09.879 fused_ordering(147) 00:22:09.879 fused_ordering(148) 00:22:09.879 fused_ordering(149) 00:22:09.879 fused_ordering(150) 00:22:09.879 fused_ordering(151) 00:22:09.879 fused_ordering(152) 00:22:09.879 fused_ordering(153) 00:22:09.879 fused_ordering(154) 00:22:09.879 fused_ordering(155) 00:22:09.879 fused_ordering(156) 00:22:09.879 fused_ordering(157) 00:22:09.879 fused_ordering(158) 00:22:09.879 fused_ordering(159) 00:22:09.879 fused_ordering(160) 00:22:09.879 fused_ordering(161) 00:22:09.879 fused_ordering(162) 00:22:09.879 fused_ordering(163) 00:22:09.879 fused_ordering(164) 00:22:09.879 fused_ordering(165) 00:22:09.879 fused_ordering(166) 00:22:09.879 fused_ordering(167) 00:22:09.879 fused_ordering(168) 00:22:09.879 fused_ordering(169) 00:22:09.879 fused_ordering(170) 00:22:09.879 fused_ordering(171) 00:22:09.879 fused_ordering(172) 00:22:09.879 fused_ordering(173) 00:22:09.879 fused_ordering(174) 00:22:09.879 fused_ordering(175) 00:22:09.879 fused_ordering(176) 00:22:09.879 fused_ordering(177) 00:22:09.879 fused_ordering(178) 00:22:09.879 fused_ordering(179) 00:22:09.879 fused_ordering(180) 00:22:09.879 fused_ordering(181) 00:22:09.879 fused_ordering(182) 00:22:09.879 fused_ordering(183) 00:22:09.879 fused_ordering(184) 00:22:09.879 fused_ordering(185) 00:22:09.879 fused_ordering(186) 00:22:09.879 fused_ordering(187) 00:22:09.879 fused_ordering(188) 00:22:09.879 fused_ordering(189) 00:22:09.879 fused_ordering(190) 00:22:09.879 fused_ordering(191) 00:22:09.879 fused_ordering(192) 00:22:09.879 fused_ordering(193) 00:22:09.879 fused_ordering(194) 00:22:09.879 fused_ordering(195) 00:22:09.879 fused_ordering(196) 00:22:09.879 fused_ordering(197) 00:22:09.879 fused_ordering(198) 00:22:09.879 fused_ordering(199) 00:22:09.879 fused_ordering(200) 00:22:09.879 fused_ordering(201) 00:22:09.879 fused_ordering(202) 00:22:09.879 fused_ordering(203) 00:22:09.879 
fused_ordering(204) 00:22:09.879 fused_ordering(205) 00:22:10.450 fused_ordering(206) 00:22:10.450 fused_ordering(207) 00:22:10.450 fused_ordering(208) 00:22:10.450 fused_ordering(209) 00:22:10.450 fused_ordering(210) 00:22:10.450 fused_ordering(211) 00:22:10.450 fused_ordering(212) 00:22:10.450 fused_ordering(213) 00:22:10.450 fused_ordering(214) 00:22:10.450 fused_ordering(215) 00:22:10.450 fused_ordering(216) 00:22:10.450 fused_ordering(217) 00:22:10.450 fused_ordering(218) 00:22:10.450 fused_ordering(219) 00:22:10.450 fused_ordering(220) 00:22:10.450 fused_ordering(221) 00:22:10.450 fused_ordering(222) 00:22:10.450 fused_ordering(223) 00:22:10.450 fused_ordering(224) 00:22:10.450 fused_ordering(225) 00:22:10.450 fused_ordering(226) 00:22:10.450 fused_ordering(227) 00:22:10.450 fused_ordering(228) 00:22:10.450 fused_ordering(229) 00:22:10.450 fused_ordering(230) 00:22:10.450 fused_ordering(231) 00:22:10.450 fused_ordering(232) 00:22:10.450 fused_ordering(233) 00:22:10.450 fused_ordering(234) 00:22:10.450 fused_ordering(235) 00:22:10.450 fused_ordering(236) 00:22:10.450 fused_ordering(237) 00:22:10.450 fused_ordering(238) 00:22:10.450 fused_ordering(239) 00:22:10.450 fused_ordering(240) 00:22:10.450 fused_ordering(241) 00:22:10.450 fused_ordering(242) 00:22:10.450 fused_ordering(243) 00:22:10.450 fused_ordering(244) 00:22:10.450 fused_ordering(245) 00:22:10.450 fused_ordering(246) 00:22:10.450 fused_ordering(247) 00:22:10.450 fused_ordering(248) 00:22:10.450 fused_ordering(249) 00:22:10.450 fused_ordering(250) 00:22:10.450 fused_ordering(251) 00:22:10.450 fused_ordering(252) 00:22:10.450 fused_ordering(253) 00:22:10.450 fused_ordering(254) 00:22:10.450 fused_ordering(255) 00:22:10.450 fused_ordering(256) 00:22:10.450 fused_ordering(257) 00:22:10.450 fused_ordering(258) 00:22:10.450 fused_ordering(259) 00:22:10.450 fused_ordering(260) 00:22:10.450 fused_ordering(261) 00:22:10.450 fused_ordering(262) 00:22:10.450 fused_ordering(263) 00:22:10.450 fused_ordering(264) 00:22:10.450 fused_ordering(265) 00:22:10.450 fused_ordering(266) 00:22:10.450 fused_ordering(267) 00:22:10.450 fused_ordering(268) 00:22:10.450 fused_ordering(269) 00:22:10.450 fused_ordering(270) 00:22:10.450 fused_ordering(271) 00:22:10.450 fused_ordering(272) 00:22:10.450 fused_ordering(273) 00:22:10.450 fused_ordering(274) 00:22:10.450 fused_ordering(275) 00:22:10.450 fused_ordering(276) 00:22:10.450 fused_ordering(277) 00:22:10.450 fused_ordering(278) 00:22:10.450 fused_ordering(279) 00:22:10.450 fused_ordering(280) 00:22:10.451 fused_ordering(281) 00:22:10.451 fused_ordering(282) 00:22:10.451 fused_ordering(283) 00:22:10.451 fused_ordering(284) 00:22:10.451 fused_ordering(285) 00:22:10.451 fused_ordering(286) 00:22:10.451 fused_ordering(287) 00:22:10.451 fused_ordering(288) 00:22:10.451 fused_ordering(289) 00:22:10.451 fused_ordering(290) 00:22:10.451 fused_ordering(291) 00:22:10.451 fused_ordering(292) 00:22:10.451 fused_ordering(293) 00:22:10.451 fused_ordering(294) 00:22:10.451 fused_ordering(295) 00:22:10.451 fused_ordering(296) 00:22:10.451 fused_ordering(297) 00:22:10.451 fused_ordering(298) 00:22:10.451 fused_ordering(299) 00:22:10.451 fused_ordering(300) 00:22:10.451 fused_ordering(301) 00:22:10.451 fused_ordering(302) 00:22:10.451 fused_ordering(303) 00:22:10.451 fused_ordering(304) 00:22:10.451 fused_ordering(305) 00:22:10.451 fused_ordering(306) 00:22:10.451 fused_ordering(307) 00:22:10.451 fused_ordering(308) 00:22:10.451 fused_ordering(309) 00:22:10.451 fused_ordering(310) 00:22:10.451 fused_ordering(311) 
00:22:10.451 fused_ordering(312) 00:22:10.451 fused_ordering(313) 00:22:10.451 fused_ordering(314) 00:22:10.451 fused_ordering(315) 00:22:10.451 fused_ordering(316) 00:22:10.451 fused_ordering(317) 00:22:10.451 fused_ordering(318) 00:22:10.451 fused_ordering(319) 00:22:10.451 fused_ordering(320) 00:22:10.451 fused_ordering(321) 00:22:10.451 fused_ordering(322) 00:22:10.451 fused_ordering(323) 00:22:10.451 fused_ordering(324) 00:22:10.451 fused_ordering(325) 00:22:10.451 fused_ordering(326) 00:22:10.451 fused_ordering(327) 00:22:10.451 fused_ordering(328) 00:22:10.451 fused_ordering(329) 00:22:10.451 fused_ordering(330) 00:22:10.451 fused_ordering(331) 00:22:10.451 fused_ordering(332) 00:22:10.451 fused_ordering(333) 00:22:10.451 fused_ordering(334) 00:22:10.451 fused_ordering(335) 00:22:10.451 fused_ordering(336) 00:22:10.451 fused_ordering(337) 00:22:10.451 fused_ordering(338) 00:22:10.451 fused_ordering(339) 00:22:10.451 fused_ordering(340) 00:22:10.451 fused_ordering(341) 00:22:10.451 fused_ordering(342) 00:22:10.451 fused_ordering(343) 00:22:10.451 fused_ordering(344) 00:22:10.451 fused_ordering(345) 00:22:10.451 fused_ordering(346) 00:22:10.451 fused_ordering(347) 00:22:10.451 fused_ordering(348) 00:22:10.451 fused_ordering(349) 00:22:10.451 fused_ordering(350) 00:22:10.451 fused_ordering(351) 00:22:10.451 fused_ordering(352) 00:22:10.451 fused_ordering(353) 00:22:10.451 fused_ordering(354) 00:22:10.451 fused_ordering(355) 00:22:10.451 fused_ordering(356) 00:22:10.451 fused_ordering(357) 00:22:10.451 fused_ordering(358) 00:22:10.451 fused_ordering(359) 00:22:10.451 fused_ordering(360) 00:22:10.451 fused_ordering(361) 00:22:10.451 fused_ordering(362) 00:22:10.451 fused_ordering(363) 00:22:10.451 fused_ordering(364) 00:22:10.451 fused_ordering(365) 00:22:10.451 fused_ordering(366) 00:22:10.451 fused_ordering(367) 00:22:10.451 fused_ordering(368) 00:22:10.451 fused_ordering(369) 00:22:10.451 fused_ordering(370) 00:22:10.451 fused_ordering(371) 00:22:10.451 fused_ordering(372) 00:22:10.451 fused_ordering(373) 00:22:10.451 fused_ordering(374) 00:22:10.451 fused_ordering(375) 00:22:10.451 fused_ordering(376) 00:22:10.451 fused_ordering(377) 00:22:10.451 fused_ordering(378) 00:22:10.451 fused_ordering(379) 00:22:10.451 fused_ordering(380) 00:22:10.451 fused_ordering(381) 00:22:10.451 fused_ordering(382) 00:22:10.451 fused_ordering(383) 00:22:10.451 fused_ordering(384) 00:22:10.451 fused_ordering(385) 00:22:10.451 fused_ordering(386) 00:22:10.451 fused_ordering(387) 00:22:10.451 fused_ordering(388) 00:22:10.451 fused_ordering(389) 00:22:10.451 fused_ordering(390) 00:22:10.451 fused_ordering(391) 00:22:10.451 fused_ordering(392) 00:22:10.451 fused_ordering(393) 00:22:10.451 fused_ordering(394) 00:22:10.451 fused_ordering(395) 00:22:10.451 fused_ordering(396) 00:22:10.451 fused_ordering(397) 00:22:10.451 fused_ordering(398) 00:22:10.451 fused_ordering(399) 00:22:10.451 fused_ordering(400) 00:22:10.451 fused_ordering(401) 00:22:10.451 fused_ordering(402) 00:22:10.451 fused_ordering(403) 00:22:10.451 fused_ordering(404) 00:22:10.451 fused_ordering(405) 00:22:10.451 fused_ordering(406) 00:22:10.451 fused_ordering(407) 00:22:10.451 fused_ordering(408) 00:22:10.451 fused_ordering(409) 00:22:10.451 fused_ordering(410) 00:22:11.021 fused_ordering(411) 00:22:11.021 fused_ordering(412) 00:22:11.021 fused_ordering(413) 00:22:11.021 fused_ordering(414) 00:22:11.021 fused_ordering(415) 00:22:11.021 fused_ordering(416) 00:22:11.021 fused_ordering(417) 00:22:11.021 fused_ordering(418) 00:22:11.021 
fused_ordering(419) 00:22:11.021 fused_ordering(420) 00:22:11.021 fused_ordering(421) 00:22:11.021 fused_ordering(422) 00:22:11.021 fused_ordering(423) 00:22:11.021 fused_ordering(424) 00:22:11.021 fused_ordering(425) 00:22:11.021 fused_ordering(426) 00:22:11.021 fused_ordering(427) 00:22:11.021 fused_ordering(428) 00:22:11.021 fused_ordering(429) 00:22:11.021 fused_ordering(430) 00:22:11.021 fused_ordering(431) 00:22:11.021 fused_ordering(432) 00:22:11.021 fused_ordering(433) 00:22:11.021 fused_ordering(434) 00:22:11.021 fused_ordering(435) 00:22:11.021 fused_ordering(436) 00:22:11.021 fused_ordering(437) 00:22:11.021 fused_ordering(438) 00:22:11.021 fused_ordering(439) 00:22:11.021 fused_ordering(440) 00:22:11.021 fused_ordering(441) 00:22:11.021 fused_ordering(442) 00:22:11.021 fused_ordering(443) 00:22:11.021 fused_ordering(444) 00:22:11.021 fused_ordering(445) 00:22:11.021 fused_ordering(446) 00:22:11.021 fused_ordering(447) 00:22:11.021 fused_ordering(448) 00:22:11.021 fused_ordering(449) 00:22:11.021 fused_ordering(450) 00:22:11.021 fused_ordering(451) 00:22:11.021 fused_ordering(452) 00:22:11.021 fused_ordering(453) 00:22:11.021 fused_ordering(454) 00:22:11.021 fused_ordering(455) 00:22:11.021 fused_ordering(456) 00:22:11.021 fused_ordering(457) 00:22:11.021 fused_ordering(458) 00:22:11.021 fused_ordering(459) 00:22:11.021 fused_ordering(460) 00:22:11.021 fused_ordering(461) 00:22:11.021 fused_ordering(462) 00:22:11.021 fused_ordering(463) 00:22:11.021 fused_ordering(464) 00:22:11.021 fused_ordering(465) 00:22:11.021 fused_ordering(466) 00:22:11.021 fused_ordering(467) 00:22:11.021 fused_ordering(468) 00:22:11.021 fused_ordering(469) 00:22:11.021 fused_ordering(470) 00:22:11.021 fused_ordering(471) 00:22:11.021 fused_ordering(472) 00:22:11.021 fused_ordering(473) 00:22:11.021 fused_ordering(474) 00:22:11.021 fused_ordering(475) 00:22:11.021 fused_ordering(476) 00:22:11.021 fused_ordering(477) 00:22:11.021 fused_ordering(478) 00:22:11.021 fused_ordering(479) 00:22:11.021 fused_ordering(480) 00:22:11.021 fused_ordering(481) 00:22:11.021 fused_ordering(482) 00:22:11.021 fused_ordering(483) 00:22:11.021 fused_ordering(484) 00:22:11.021 fused_ordering(485) 00:22:11.021 fused_ordering(486) 00:22:11.021 fused_ordering(487) 00:22:11.021 fused_ordering(488) 00:22:11.021 fused_ordering(489) 00:22:11.021 fused_ordering(490) 00:22:11.021 fused_ordering(491) 00:22:11.021 fused_ordering(492) 00:22:11.021 fused_ordering(493) 00:22:11.021 fused_ordering(494) 00:22:11.021 fused_ordering(495) 00:22:11.021 fused_ordering(496) 00:22:11.021 fused_ordering(497) 00:22:11.021 fused_ordering(498) 00:22:11.021 fused_ordering(499) 00:22:11.021 fused_ordering(500) 00:22:11.021 fused_ordering(501) 00:22:11.021 fused_ordering(502) 00:22:11.021 fused_ordering(503) 00:22:11.021 fused_ordering(504) 00:22:11.021 fused_ordering(505) 00:22:11.021 fused_ordering(506) 00:22:11.021 fused_ordering(507) 00:22:11.021 fused_ordering(508) 00:22:11.021 fused_ordering(509) 00:22:11.021 fused_ordering(510) 00:22:11.021 fused_ordering(511) 00:22:11.021 fused_ordering(512) 00:22:11.021 fused_ordering(513) 00:22:11.021 fused_ordering(514) 00:22:11.021 fused_ordering(515) 00:22:11.021 fused_ordering(516) 00:22:11.021 fused_ordering(517) 00:22:11.021 fused_ordering(518) 00:22:11.021 fused_ordering(519) 00:22:11.021 fused_ordering(520) 00:22:11.021 fused_ordering(521) 00:22:11.021 fused_ordering(522) 00:22:11.021 fused_ordering(523) 00:22:11.021 fused_ordering(524) 00:22:11.021 fused_ordering(525) 00:22:11.021 fused_ordering(526) 
00:22:11.021 fused_ordering(527) 00:22:11.021 fused_ordering(528) 00:22:11.021 fused_ordering(529) 00:22:11.021 fused_ordering(530) 00:22:11.021 fused_ordering(531) 00:22:11.021 fused_ordering(532) 00:22:11.021 fused_ordering(533) 00:22:11.021 fused_ordering(534) 00:22:11.021 fused_ordering(535) 00:22:11.021 fused_ordering(536) 00:22:11.021 fused_ordering(537) 00:22:11.021 fused_ordering(538) 00:22:11.021 fused_ordering(539) 00:22:11.021 fused_ordering(540) 00:22:11.021 fused_ordering(541) 00:22:11.021 fused_ordering(542) 00:22:11.021 fused_ordering(543) 00:22:11.021 fused_ordering(544) 00:22:11.021 fused_ordering(545) 00:22:11.021 fused_ordering(546) 00:22:11.021 fused_ordering(547) 00:22:11.021 fused_ordering(548) 00:22:11.021 fused_ordering(549) 00:22:11.021 fused_ordering(550) 00:22:11.021 fused_ordering(551) 00:22:11.021 fused_ordering(552) 00:22:11.021 fused_ordering(553) 00:22:11.021 fused_ordering(554) 00:22:11.021 fused_ordering(555) 00:22:11.021 fused_ordering(556) 00:22:11.021 fused_ordering(557) 00:22:11.021 fused_ordering(558) 00:22:11.021 fused_ordering(559) 00:22:11.021 fused_ordering(560) 00:22:11.021 fused_ordering(561) 00:22:11.021 fused_ordering(562) 00:22:11.021 fused_ordering(563) 00:22:11.021 fused_ordering(564) 00:22:11.021 fused_ordering(565) 00:22:11.021 fused_ordering(566) 00:22:11.021 fused_ordering(567) 00:22:11.021 fused_ordering(568) 00:22:11.021 fused_ordering(569) 00:22:11.021 fused_ordering(570) 00:22:11.021 fused_ordering(571) 00:22:11.021 fused_ordering(572) 00:22:11.021 fused_ordering(573) 00:22:11.021 fused_ordering(574) 00:22:11.021 fused_ordering(575) 00:22:11.021 fused_ordering(576) 00:22:11.021 fused_ordering(577) 00:22:11.021 fused_ordering(578) 00:22:11.021 fused_ordering(579) 00:22:11.021 fused_ordering(580) 00:22:11.021 fused_ordering(581) 00:22:11.021 fused_ordering(582) 00:22:11.021 fused_ordering(583) 00:22:11.021 fused_ordering(584) 00:22:11.021 fused_ordering(585) 00:22:11.021 fused_ordering(586) 00:22:11.021 fused_ordering(587) 00:22:11.021 fused_ordering(588) 00:22:11.021 fused_ordering(589) 00:22:11.021 fused_ordering(590) 00:22:11.021 fused_ordering(591) 00:22:11.021 fused_ordering(592) 00:22:11.021 fused_ordering(593) 00:22:11.021 fused_ordering(594) 00:22:11.021 fused_ordering(595) 00:22:11.021 fused_ordering(596) 00:22:11.021 fused_ordering(597) 00:22:11.021 fused_ordering(598) 00:22:11.021 fused_ordering(599) 00:22:11.021 fused_ordering(600) 00:22:11.021 fused_ordering(601) 00:22:11.021 fused_ordering(602) 00:22:11.021 fused_ordering(603) 00:22:11.021 fused_ordering(604) 00:22:11.021 fused_ordering(605) 00:22:11.021 fused_ordering(606) 00:22:11.021 fused_ordering(607) 00:22:11.021 fused_ordering(608) 00:22:11.021 fused_ordering(609) 00:22:11.021 fused_ordering(610) 00:22:11.021 fused_ordering(611) 00:22:11.021 fused_ordering(612) 00:22:11.021 fused_ordering(613) 00:22:11.021 fused_ordering(614) 00:22:11.022 fused_ordering(615) 00:22:11.591 fused_ordering(616) 00:22:11.591 fused_ordering(617) 00:22:11.591 fused_ordering(618) 00:22:11.591 fused_ordering(619) 00:22:11.591 fused_ordering(620) 00:22:11.591 fused_ordering(621) 00:22:11.591 fused_ordering(622) 00:22:11.591 fused_ordering(623) 00:22:11.591 fused_ordering(624) 00:22:11.591 fused_ordering(625) 00:22:11.591 fused_ordering(626) 00:22:11.591 fused_ordering(627) 00:22:11.591 fused_ordering(628) 00:22:11.591 fused_ordering(629) 00:22:11.591 fused_ordering(630) 00:22:11.591 fused_ordering(631) 00:22:11.591 fused_ordering(632) 00:22:11.591 fused_ordering(633) 00:22:11.591 
fused_ordering(634) 00:22:11.591 fused_ordering(635) 00:22:11.591 fused_ordering(636) 00:22:11.591 fused_ordering(637) 00:22:11.591 fused_ordering(638) 00:22:11.591 fused_ordering(639) 00:22:11.591 fused_ordering(640) 00:22:11.591 fused_ordering(641) 00:22:11.591 fused_ordering(642) 00:22:11.591 fused_ordering(643) 00:22:11.591 fused_ordering(644) 00:22:11.591 fused_ordering(645) 00:22:11.591 fused_ordering(646) 00:22:11.591 fused_ordering(647) 00:22:11.591 fused_ordering(648) 00:22:11.591 fused_ordering(649) 00:22:11.591 fused_ordering(650) 00:22:11.591 fused_ordering(651) 00:22:11.592 fused_ordering(652) 00:22:11.592 fused_ordering(653) 00:22:11.592 fused_ordering(654) 00:22:11.592 fused_ordering(655) 00:22:11.592 fused_ordering(656) 00:22:11.592 fused_ordering(657) 00:22:11.592 fused_ordering(658) 00:22:11.592 fused_ordering(659) 00:22:11.592 fused_ordering(660) 00:22:11.592 fused_ordering(661) 00:22:11.592 fused_ordering(662) 00:22:11.592 fused_ordering(663) 00:22:11.592 fused_ordering(664) 00:22:11.592 fused_ordering(665) 00:22:11.592 fused_ordering(666) 00:22:11.592 fused_ordering(667) 00:22:11.592 fused_ordering(668) 00:22:11.592 fused_ordering(669) 00:22:11.592 fused_ordering(670) 00:22:11.592 fused_ordering(671) 00:22:11.592 fused_ordering(672) 00:22:11.592 fused_ordering(673) 00:22:11.592 fused_ordering(674) 00:22:11.592 fused_ordering(675) 00:22:11.592 fused_ordering(676) 00:22:11.592 fused_ordering(677) 00:22:11.592 fused_ordering(678) 00:22:11.592 fused_ordering(679) 00:22:11.592 fused_ordering(680) 00:22:11.592 fused_ordering(681) 00:22:11.592 fused_ordering(682) 00:22:11.592 fused_ordering(683) 00:22:11.592 fused_ordering(684) 00:22:11.592 fused_ordering(685) 00:22:11.592 fused_ordering(686) 00:22:11.592 fused_ordering(687) 00:22:11.592 fused_ordering(688) 00:22:11.592 fused_ordering(689) 00:22:11.592 fused_ordering(690) 00:22:11.592 fused_ordering(691) 00:22:11.592 fused_ordering(692) 00:22:11.592 fused_ordering(693) 00:22:11.592 fused_ordering(694) 00:22:11.592 fused_ordering(695) 00:22:11.592 fused_ordering(696) 00:22:11.592 fused_ordering(697) 00:22:11.592 fused_ordering(698) 00:22:11.592 fused_ordering(699) 00:22:11.592 fused_ordering(700) 00:22:11.592 fused_ordering(701) 00:22:11.592 fused_ordering(702) 00:22:11.592 fused_ordering(703) 00:22:11.592 fused_ordering(704) 00:22:11.592 fused_ordering(705) 00:22:11.592 fused_ordering(706) 00:22:11.592 fused_ordering(707) 00:22:11.592 fused_ordering(708) 00:22:11.592 fused_ordering(709) 00:22:11.592 fused_ordering(710) 00:22:11.592 fused_ordering(711) 00:22:11.592 fused_ordering(712) 00:22:11.592 fused_ordering(713) 00:22:11.592 fused_ordering(714) 00:22:11.592 fused_ordering(715) 00:22:11.592 fused_ordering(716) 00:22:11.592 fused_ordering(717) 00:22:11.592 fused_ordering(718) 00:22:11.592 fused_ordering(719) 00:22:11.592 fused_ordering(720) 00:22:11.592 fused_ordering(721) 00:22:11.592 fused_ordering(722) 00:22:11.592 fused_ordering(723) 00:22:11.592 fused_ordering(724) 00:22:11.592 fused_ordering(725) 00:22:11.592 fused_ordering(726) 00:22:11.592 fused_ordering(727) 00:22:11.592 fused_ordering(728) 00:22:11.592 fused_ordering(729) 00:22:11.592 fused_ordering(730) 00:22:11.592 fused_ordering(731) 00:22:11.592 fused_ordering(732) 00:22:11.592 fused_ordering(733) 00:22:11.592 fused_ordering(734) 00:22:11.592 fused_ordering(735) 00:22:11.592 fused_ordering(736) 00:22:11.592 fused_ordering(737) 00:22:11.592 fused_ordering(738) 00:22:11.592 fused_ordering(739) 00:22:11.592 fused_ordering(740) 00:22:11.592 fused_ordering(741) 
00:22:11.592 fused_ordering(742) 00:22:11.592 fused_ordering(743) 00:22:11.592 fused_ordering(744) 00:22:11.592 fused_ordering(745) 00:22:11.592 fused_ordering(746) 00:22:11.592 fused_ordering(747) 00:22:11.592 fused_ordering(748) 00:22:11.592 fused_ordering(749) 00:22:11.592 fused_ordering(750) 00:22:11.592 fused_ordering(751) 00:22:11.592 fused_ordering(752) 00:22:11.592 fused_ordering(753) 00:22:11.592 fused_ordering(754) 00:22:11.592 fused_ordering(755) 00:22:11.592 fused_ordering(756) 00:22:11.592 fused_ordering(757) 00:22:11.592 fused_ordering(758) 00:22:11.592 fused_ordering(759) 00:22:11.592 fused_ordering(760) 00:22:11.592 fused_ordering(761) 00:22:11.592 fused_ordering(762) 00:22:11.592 fused_ordering(763) 00:22:11.592 fused_ordering(764) 00:22:11.592 fused_ordering(765) 00:22:11.592 fused_ordering(766) 00:22:11.592 fused_ordering(767) 00:22:11.592 fused_ordering(768) 00:22:11.592 fused_ordering(769) 00:22:11.592 fused_ordering(770) 00:22:11.592 fused_ordering(771) 00:22:11.592 fused_ordering(772) 00:22:11.592 fused_ordering(773) 00:22:11.592 fused_ordering(774) 00:22:11.592 fused_ordering(775) 00:22:11.592 fused_ordering(776) 00:22:11.592 fused_ordering(777) 00:22:11.592 fused_ordering(778) 00:22:11.592 fused_ordering(779) 00:22:11.592 fused_ordering(780) 00:22:11.592 fused_ordering(781) 00:22:11.592 fused_ordering(782) 00:22:11.592 fused_ordering(783) 00:22:11.592 fused_ordering(784) 00:22:11.592 fused_ordering(785) 00:22:11.592 fused_ordering(786) 00:22:11.592 fused_ordering(787) 00:22:11.592 fused_ordering(788) 00:22:11.592 fused_ordering(789) 00:22:11.592 fused_ordering(790) 00:22:11.592 fused_ordering(791) 00:22:11.592 fused_ordering(792) 00:22:11.592 fused_ordering(793) 00:22:11.592 fused_ordering(794) 00:22:11.592 fused_ordering(795) 00:22:11.592 fused_ordering(796) 00:22:11.592 fused_ordering(797) 00:22:11.592 fused_ordering(798) 00:22:11.592 fused_ordering(799) 00:22:11.592 fused_ordering(800) 00:22:11.592 fused_ordering(801) 00:22:11.592 fused_ordering(802) 00:22:11.592 fused_ordering(803) 00:22:11.592 fused_ordering(804) 00:22:11.592 fused_ordering(805) 00:22:11.592 fused_ordering(806) 00:22:11.592 fused_ordering(807) 00:22:11.592 fused_ordering(808) 00:22:11.592 fused_ordering(809) 00:22:11.592 fused_ordering(810) 00:22:11.592 fused_ordering(811) 00:22:11.592 fused_ordering(812) 00:22:11.592 fused_ordering(813) 00:22:11.592 fused_ordering(814) 00:22:11.592 fused_ordering(815) 00:22:11.592 fused_ordering(816) 00:22:11.592 fused_ordering(817) 00:22:11.592 fused_ordering(818) 00:22:11.592 fused_ordering(819) 00:22:11.592 fused_ordering(820) 00:22:12.534 fused_ordering(821) 00:22:12.534 fused_ordering(822) 00:22:12.534 fused_ordering(823) 00:22:12.534 fused_ordering(824) 00:22:12.534 fused_ordering(825) 00:22:12.534 fused_ordering(826) 00:22:12.534 fused_ordering(827) 00:22:12.534 fused_ordering(828) 00:22:12.534 fused_ordering(829) 00:22:12.534 fused_ordering(830) 00:22:12.534 fused_ordering(831) 00:22:12.534 fused_ordering(832) 00:22:12.534 fused_ordering(833) 00:22:12.534 fused_ordering(834) 00:22:12.534 fused_ordering(835) 00:22:12.534 fused_ordering(836) 00:22:12.534 fused_ordering(837) 00:22:12.534 fused_ordering(838) 00:22:12.534 fused_ordering(839) 00:22:12.534 fused_ordering(840) 00:22:12.534 fused_ordering(841) 00:22:12.534 fused_ordering(842) 00:22:12.534 fused_ordering(843) 00:22:12.534 fused_ordering(844) 00:22:12.534 fused_ordering(845) 00:22:12.534 fused_ordering(846) 00:22:12.534 fused_ordering(847) 00:22:12.534 fused_ordering(848) 00:22:12.534 
fused_ordering(849) 00:22:12.534 fused_ordering(850) 00:22:12.534 fused_ordering(851) 00:22:12.534 fused_ordering(852) 00:22:12.534 fused_ordering(853) 00:22:12.534 fused_ordering(854) 00:22:12.534 fused_ordering(855) 00:22:12.534 fused_ordering(856) 00:22:12.534 fused_ordering(857) 00:22:12.534 fused_ordering(858) 00:22:12.534 fused_ordering(859) 00:22:12.534 fused_ordering(860) 00:22:12.534 fused_ordering(861) 00:22:12.534 fused_ordering(862) 00:22:12.534 fused_ordering(863) 00:22:12.534 fused_ordering(864) 00:22:12.534 fused_ordering(865) 00:22:12.534 fused_ordering(866) 00:22:12.534 fused_ordering(867) 00:22:12.534 fused_ordering(868) 00:22:12.534 fused_ordering(869) 00:22:12.534 fused_ordering(870) 00:22:12.534 fused_ordering(871) 00:22:12.534 fused_ordering(872) 00:22:12.534 fused_ordering(873) 00:22:12.534 fused_ordering(874) 00:22:12.534 fused_ordering(875) 00:22:12.534 fused_ordering(876) 00:22:12.534 fused_ordering(877) 00:22:12.534 fused_ordering(878) 00:22:12.534 fused_ordering(879) 00:22:12.534 fused_ordering(880) 00:22:12.534 fused_ordering(881) 00:22:12.534 fused_ordering(882) 00:22:12.534 fused_ordering(883) 00:22:12.534 fused_ordering(884) 00:22:12.534 fused_ordering(885) 00:22:12.534 fused_ordering(886) 00:22:12.534 fused_ordering(887) 00:22:12.534 fused_ordering(888) 00:22:12.534 fused_ordering(889) 00:22:12.534 fused_ordering(890) 00:22:12.534 fused_ordering(891) 00:22:12.534 fused_ordering(892) 00:22:12.534 fused_ordering(893) 00:22:12.534 fused_ordering(894) 00:22:12.534 fused_ordering(895) 00:22:12.534 fused_ordering(896) 00:22:12.534 fused_ordering(897) 00:22:12.534 fused_ordering(898) 00:22:12.534 fused_ordering(899) 00:22:12.534 fused_ordering(900) 00:22:12.534 fused_ordering(901) 00:22:12.534 fused_ordering(902) 00:22:12.534 fused_ordering(903) 00:22:12.534 fused_ordering(904) 00:22:12.534 fused_ordering(905) 00:22:12.534 fused_ordering(906) 00:22:12.534 fused_ordering(907) 00:22:12.534 fused_ordering(908) 00:22:12.534 fused_ordering(909) 00:22:12.534 fused_ordering(910) 00:22:12.534 fused_ordering(911) 00:22:12.534 fused_ordering(912) 00:22:12.534 fused_ordering(913) 00:22:12.534 fused_ordering(914) 00:22:12.534 fused_ordering(915) 00:22:12.534 fused_ordering(916) 00:22:12.534 fused_ordering(917) 00:22:12.534 fused_ordering(918) 00:22:12.534 fused_ordering(919) 00:22:12.534 fused_ordering(920) 00:22:12.534 fused_ordering(921) 00:22:12.534 fused_ordering(922) 00:22:12.534 fused_ordering(923) 00:22:12.534 fused_ordering(924) 00:22:12.534 fused_ordering(925) 00:22:12.534 fused_ordering(926) 00:22:12.534 fused_ordering(927) 00:22:12.534 fused_ordering(928) 00:22:12.534 fused_ordering(929) 00:22:12.534 fused_ordering(930) 00:22:12.534 fused_ordering(931) 00:22:12.534 fused_ordering(932) 00:22:12.534 fused_ordering(933) 00:22:12.534 fused_ordering(934) 00:22:12.534 fused_ordering(935) 00:22:12.534 fused_ordering(936) 00:22:12.534 fused_ordering(937) 00:22:12.534 fused_ordering(938) 00:22:12.534 fused_ordering(939) 00:22:12.534 fused_ordering(940) 00:22:12.534 fused_ordering(941) 00:22:12.534 fused_ordering(942) 00:22:12.534 fused_ordering(943) 00:22:12.534 fused_ordering(944) 00:22:12.534 fused_ordering(945) 00:22:12.534 fused_ordering(946) 00:22:12.534 fused_ordering(947) 00:22:12.534 fused_ordering(948) 00:22:12.534 fused_ordering(949) 00:22:12.534 fused_ordering(950) 00:22:12.534 fused_ordering(951) 00:22:12.534 fused_ordering(952) 00:22:12.534 fused_ordering(953) 00:22:12.534 fused_ordering(954) 00:22:12.534 fused_ordering(955) 00:22:12.534 fused_ordering(956) 
00:22:12.534 fused_ordering(957) 00:22:12.534 fused_ordering(958) 00:22:12.534 fused_ordering(959) 00:22:12.534 fused_ordering(960) 00:22:12.534 fused_ordering(961) 00:22:12.534 fused_ordering(962) 00:22:12.534 fused_ordering(963) 00:22:12.534 fused_ordering(964) 00:22:12.534 fused_ordering(965) 00:22:12.534 fused_ordering(966) 00:22:12.534 fused_ordering(967) 00:22:12.534 fused_ordering(968) 00:22:12.534 fused_ordering(969) 00:22:12.534 fused_ordering(970) 00:22:12.534 fused_ordering(971) 00:22:12.534 fused_ordering(972) 00:22:12.534 fused_ordering(973) 00:22:12.534 fused_ordering(974) 00:22:12.534 fused_ordering(975) 00:22:12.534 fused_ordering(976) 00:22:12.535 fused_ordering(977) 00:22:12.535 fused_ordering(978) 00:22:12.535 fused_ordering(979) 00:22:12.535 fused_ordering(980) 00:22:12.535 fused_ordering(981) 00:22:12.535 fused_ordering(982) 00:22:12.535 fused_ordering(983) 00:22:12.535 fused_ordering(984) 00:22:12.535 fused_ordering(985) 00:22:12.535 fused_ordering(986) 00:22:12.535 fused_ordering(987) 00:22:12.535 fused_ordering(988) 00:22:12.535 fused_ordering(989) 00:22:12.535 fused_ordering(990) 00:22:12.535 fused_ordering(991) 00:22:12.535 fused_ordering(992) 00:22:12.535 fused_ordering(993) 00:22:12.535 fused_ordering(994) 00:22:12.535 fused_ordering(995) 00:22:12.535 fused_ordering(996) 00:22:12.535 fused_ordering(997) 00:22:12.535 fused_ordering(998) 00:22:12.535 fused_ordering(999) 00:22:12.535 fused_ordering(1000) 00:22:12.535 fused_ordering(1001) 00:22:12.535 fused_ordering(1002) 00:22:12.535 fused_ordering(1003) 00:22:12.535 fused_ordering(1004) 00:22:12.535 fused_ordering(1005) 00:22:12.535 fused_ordering(1006) 00:22:12.535 fused_ordering(1007) 00:22:12.535 fused_ordering(1008) 00:22:12.535 fused_ordering(1009) 00:22:12.535 fused_ordering(1010) 00:22:12.535 fused_ordering(1011) 00:22:12.535 fused_ordering(1012) 00:22:12.535 fused_ordering(1013) 00:22:12.535 fused_ordering(1014) 00:22:12.535 fused_ordering(1015) 00:22:12.535 fused_ordering(1016) 00:22:12.535 fused_ordering(1017) 00:22:12.535 fused_ordering(1018) 00:22:12.535 fused_ordering(1019) 00:22:12.535 fused_ordering(1020) 00:22:12.535 fused_ordering(1021) 00:22:12.535 fused_ordering(1022) 00:22:12.535 fused_ordering(1023) 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.535 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.535 rmmod nvme_tcp 00:22:12.795 rmmod nvme_fabrics 00:22:12.795 rmmod nvme_keyring 00:22:12.795 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.795 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:22:12.795 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 875221 ']' 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 875221 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 875221 ']' 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 875221 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 875221 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 875221' 00:22:12.796 killing process with pid 875221 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 875221 00:22:12.796 23:02:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 875221 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.055 23:02:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.601 00:22:15.601 real 0m9.741s 00:22:15.601 user 0m6.504s 00:22:15.601 sys 0m5.050s 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:22:15.601 ************************************ 00:22:15.601 END TEST nvmf_fused_ordering 00:22:15.601 ************************************ 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:15.601 23:02:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.601 ************************************ 00:22:15.601 START TEST nvmf_ns_masking 00:22:15.601 ************************************ 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:22:15.601 * Looking for test storage... 00:22:15.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:15.601 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.602 23:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=63c491bf-9f0f-4aac-9d47-ddd7489be61d 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5372eeec-ac5b-4acc-b092-2ccf66ec787f 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f6f61070-af16-4ecc-ba44-ce6b58c1a596 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.602 23:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:18.898 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:18.898 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:18.898 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:18.899 Found net devices under 0000:84:00.0: cvl_0_0 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:18.899 Found net devices under 0000:84:00.1: cvl_0_1 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.899 23:02:54 
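The nvmf_tcp_init steps above wire the two E810 ports back-to-back: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the loopback, iptables and ping checks follow in the next few trace lines. Condensed from the commands shown above (interface and namespace names as in this run):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up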
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:22:18.899 00:22:18.899 --- 10.0.0.2 ping statistics --- 00:22:18.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.899 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:18.899 00:22:18.899 --- 10.0.0.1 ping statistics --- 00:22:18.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.899 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=877794 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 877794 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 877794 ']' 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.899 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:18.899 [2024-07-22 23:02:54.808047] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:22:18.899 [2024-07-22 23:02:54.808180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.899 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.899 [2024-07-22 23:02:54.938906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.899 [2024-07-22 23:02:55.089980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.899 [2024-07-22 23:02:55.090087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.899 [2024-07-22 23:02:55.090124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.899 [2024-07-22 23:02:55.090153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.899 [2024-07-22 23:02:55.090180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.899 [2024-07-22 23:02:55.090254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.160 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:19.420 [2024-07-22 23:02:55.660801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.420 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:22:19.420 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:22:19.420 23:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:19.989 Malloc1 00:22:19.989 23:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:22:20.564 Malloc2 00:22:20.564 23:02:56 
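With the namespace plumbed, the trace above starts nvmf_tgt inside it, waits for its RPC socket, then creates the TCP transport and two 64 MiB malloc bdevs (512-byte blocks) that back the test namespaces. A condensed replay of those steps; rpc stands for scripts/rpc.py in the SPDK checkout and the flags are copied from the trace:

    rpc=./scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # ... wait until /var/tmp/spdk.sock accepts connections, then:
    $rpc nvmf_create_transport -t tcp -o -u 8192         # transport options as used by this run
    $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MiB, 512 B block size
    $rpc bdev_malloc_create 64 512 -b Malloc2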
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:21.131 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:22:21.390 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.960 [2024-07-22 23:02:58.109084] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.960 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:22:21.960 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f6f61070-af16-4ecc-ba44-ce6b58c1a596 -a 10.0.0.2 -s 4420 -i 4 00:22:22.220 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:22:22.220 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:22:22.220 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.220 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:22.220 23:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:24.130 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:24.389 [ 0]:0x1 00:22:24.389 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:22:24.389 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:24.389 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d1ed12ad4e4a26a5bc4ca0924a9083 00:22:24.389 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d1ed12ad4e4a26a5bc4ca0924a9083 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:24.389 23:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:24.960 [ 0]:0x1 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d1ed12ad4e4a26a5bc4ca0924a9083 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d1ed12ad4e4a26a5bc4ca0924a9083 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:24.960 [ 1]:0x2 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:24.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:24.960 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:25.529 23:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
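The connect / ns_is_visible pattern that starts here and repeats for the rest of the test is: attach with a fixed host NQN and host ID, wait for the controller's serial to show up in lsblk, then decide whether a namespace is exposed by reading its NGUID; as the trace shows, a masked or inactive NSID reports an all-zero NGUID. A minimal host-side sketch, assuming the controller lands on /dev/nvme0 and using a plain sleep in place of the waitforserial helper:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I f6f61070-af16-4ecc-ba44-ce6b58c1a596
    sleep 2
    nguid=$(nvme id-ns /dev/nvme0 -n 1 -o json | jq -r .nguid)
    if [[ $nguid != 00000000000000000000000000000000 ]]; then
        echo "NSID 1 visible, nguid=$nguid"
    else
        echo "NSID 1 masked"
    fi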
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f6f61070-af16-4ecc-ba44-ce6b58c1a596 -a 10.0.0.2 -s 4420 -i 4 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:22:26.100 23:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:22:28.010 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:28.010 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:28.010 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:28.010 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:28.010 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:28.010 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:22:28.269 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:22:28.269 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:28.269 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:22:28.269 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:28.270 [ 0]:0x2 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:28.270 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:28.838 [ 0]:0x1 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d1ed12ad4e4a26a5bc4ca0924a9083 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d1ed12ad4e4a26a5bc4ca0924a9083 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:28.838 [ 1]:0x2 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:22:28.838 23:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:28.838 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:28.838 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:28.838 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:29.410 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:29.671 [ 0]:0x2 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:29.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:29.671 23:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f6f61070-af16-4ecc-ba44-ce6b58c1a596 -a 10.0.0.2 -s 4420 -i 4 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:22:30.242 23:03:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
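The exchange above is the heart of the masking test: namespace 1 was re-added with --no-auto-visible, so no initiator sees it until its host NQN is allowed with nvmf_ns_add_host, and it disappears again after nvmf_ns_remove_host; namespace 2, added without the flag, stays visible throughout. Target-side summary of that workflow, paraphrasing the RPC calls in the trace (rpc as above):

    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 appears to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # and is hidden again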
00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:32.785 [ 0]:0x1 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d1ed12ad4e4a26a5bc4ca0924a9083 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d1ed12ad4e4a26a5bc4ca0924a9083 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:32.785 [ 1]:0x2 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:32.785 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:33.045 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:22:33.306 23:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:33.306 [ 0]:0x2 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:22:33.306 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:22:33.876 [2024-07-22 23:03:10.033548] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:22:33.876 request: 00:22:33.876 { 00:22:33.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.876 "nsid": 2, 00:22:33.876 "host": "nqn.2016-06.io.spdk:host1", 00:22:33.876 "method": "nvmf_ns_remove_host", 00:22:33.876 "req_id": 1 00:22:33.876 } 00:22:33.876 Got JSON-RPC error response 00:22:33.876 response: 00:22:33.876 { 00:22:33.876 "code": -32602, 00:22:33.876 "message": "Invalid parameters" 00:22:33.876 } 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:33.876 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:22:33.877 [ 0]:0x2 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0118c8c308384f49986843fe8173e2b7 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0118c8c308384f49986843fe8173e2b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:22:33.877 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:34.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=880145 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 880145 /var/tmp/host.sock 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 880145 ']' 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:34.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.137 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:34.137 [2024-07-22 23:03:10.319016] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
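The -32602 reply a few lines above is the negative case: host1 was never granted access to NSID 2, so nvmf_ns_remove_host for that pairing is rejected with 'Invalid parameters', and the harness's NOT wrapper counts the non-zero exit of rpc.py as the expected result. The same expectation written out directly (a sketch, rpc as above):

    if $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo "unexpected success: host1 was never added to NSID 2" >&2
        exit 1
    fi
    echo "got the expected Invalid parameters error"     # JSON-RPC code -32602 in the reply above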
00:22:34.137 [2024-07-22 23:03:10.319143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880145 ] 00:22:34.137 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.137 [2024-07-22 23:03:10.411514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.397 [2024-07-22 23:03:10.527709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.657 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.657 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:22:34.657 23:03:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:35.235 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:35.805 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 63c491bf-9f0f-4aac-9d47-ddd7489be61d 00:22:35.805 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:22:35.805 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 63C491BF9F0F4AAC9D47DDD7489BE61D -i 00:22:36.746 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5372eeec-ac5b-4acc-b092-2ccf66ec787f 00:22:36.746 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:22:36.746 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5372EEECAC5B4ACCB0922CCF66EC787F -i 00:22:37.007 23:03:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:37.576 23:03:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:22:38.147 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:22:38.147 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:22:38.716 nvme0n1 00:22:38.716 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:22:38.716 23:03:14 
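This last phase replaces the kernel initiator with an SPDK one: a second spdk_tgt is started with -r /var/tmp/host.sock -m 2 to play the host role, both namespaces are re-created with explicit NGUIDs derived from freshly generated UUIDs, each namespace is exposed to a different host NQN, and bdev_nvme_attach_controller is issued once per host NQN so that each connection sees exactly one namespace. The NGUID derivation is simply the UUID with dashes stripped (upper-cased here, matching the value in the trace). A sketch using the first UUID from this run, with the RPC flags copied verbatim from the trace:

    uuid=63c491bf-9f0f-4aac-9d47-ddd7489be61d
    nguid=$(tr -d - <<< "${uuid^^}")                     # 63C491BF9F0F4AAC9D47DDD7489BE61D
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    $rpc nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    hostrpc="$rpc -s /var/tmp/host.sock"                 # drive the host-side SPDK app over its own socket
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0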
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:22:39.284 nvme1n2 00:22:39.284 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:22:39.284 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:22:39.284 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:22:39.284 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:22:39.284 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:22:39.851 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:22:39.851 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:22:39.851 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:22:39.851 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:22:40.110 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 63c491bf-9f0f-4aac-9d47-ddd7489be61d == \6\3\c\4\9\1\b\f\-\9\f\0\f\-\4\a\a\c\-\9\d\4\7\-\d\d\d\7\4\8\9\b\e\6\1\d ]] 00:22:40.110 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:22:40.110 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:22:40.110 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5372eeec-ac5b-4acc-b092-2ccf66ec787f == \5\3\7\2\e\e\e\c\-\a\c\5\b\-\4\a\c\c\-\b\0\9\2\-\2\c\c\f\6\6\e\c\7\8\7\f ]] 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 880145 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 880145 ']' 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 880145 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880145 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 880145' 00:22:40.675 killing process with pid 880145 00:22:40.675 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 880145 00:22:40.676 23:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 880145 00:22:40.935 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.870 rmmod nvme_tcp 00:22:41.870 rmmod nvme_fabrics 00:22:41.870 rmmod nvme_keyring 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:22:41.870 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 877794 ']' 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 877794 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 877794 ']' 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 877794 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 877794 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 877794' 00:22:41.871 killing process with pid 877794 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 877794 00:22:41.871 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 877794 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.130 23:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.130 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.669 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.669 00:22:44.669 real 0m29.053s 00:22:44.669 user 0m43.087s 00:22:44.669 sys 0m6.394s 00:22:44.669 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.669 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:44.669 ************************************ 00:22:44.669 END TEST nvmf_ns_masking 00:22:44.669 ************************************ 00:22:44.669 23:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.670 ************************************ 00:22:44.670 START TEST nvmf_nvme_cli 00:22:44.670 ************************************ 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:22:44.670 * Looking for test storage... 
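Teardown mirrors the setup: the host-side app and the target are killed by PID, the subsystem deleted, the kernel NVMe/TCP modules unloaded (the rmmod lines above), and finally the test-owned addresses are flushed and the target namespace removed, which is what the remove_spdk_ns and ip -4 addr flush steps around this point do. A condensed sketch of that cleanup; the explicit ip netns del is an assumption about what remove_spdk_ns performs:

    kill "$hostpid"                                      # host-side spdk_tgt; killprocess also waits for exit
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                                      # nvmf_tgt inside the namespace
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1
    ip netns del cvl_0_0_ns_spdk                         # assumed content of remove_spdk_ns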
00:22:44.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.670 23:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.670 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.960 23:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:47.960 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:47.960 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.960 23:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:47.960 Found net devices under 0000:84:00.0: cvl_0_0 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:47.960 Found net devices under 0000:84:00.1: cvl_0_1 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.960 23:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:22:47.960 00:22:47.960 --- 10.0.0.2 ping statistics --- 00:22:47.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.960 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:47.960 00:22:47.960 --- 10.0.0.1 ping statistics --- 00:22:47.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.960 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=883190 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 883190 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 883190 ']' 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.960 23:03:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:47.960 [2024-07-22 23:03:23.974191] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:22:47.960 [2024-07-22 23:03:23.974398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.960 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.960 [2024-07-22 23:03:24.129604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.218 [2024-07-22 23:03:24.288634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.218 [2024-07-22 23:03:24.288736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.218 [2024-07-22 23:03:24.288772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.218 [2024-07-22 23:03:24.288800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.218 [2024-07-22 23:03:24.288825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.218 [2024-07-22 23:03:24.288979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.218 [2024-07-22 23:03:24.289042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.218 [2024-07-22 23:03:24.289092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.218 [2024-07-22 23:03:24.289094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.218 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.218 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:22:48.218 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.218 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.218 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 [2024-07-22 23:03:24.550947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 Malloc0 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:48.476 23:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 Malloc1 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.476 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.477 [2024-07-22 23:03:24.645804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.477 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.477 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:48.477 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.477 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:48.477 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.477 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:22:48.477 00:22:48.477 Discovery Log Number of Records 2, Generation counter 2 00:22:48.477 =====Discovery Log Entry 0====== 00:22:48.477 trtype: tcp 00:22:48.477 adrfam: ipv4 00:22:48.477 subtype: current discovery subsystem 00:22:48.477 treq: not required 
00:22:48.477 portid: 0 00:22:48.477 trsvcid: 4420 00:22:48.477 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:48.477 traddr: 10.0.0.2 00:22:48.477 eflags: explicit discovery connections, duplicate discovery information 00:22:48.477 sectype: none 00:22:48.477 =====Discovery Log Entry 1====== 00:22:48.477 trtype: tcp 00:22:48.477 adrfam: ipv4 00:22:48.477 subtype: nvme subsystem 00:22:48.477 treq: not required 00:22:48.477 portid: 0 00:22:48.477 trsvcid: 4420 00:22:48.477 subnqn: nqn.2016-06.io.spdk:cnode1 00:22:48.477 traddr: 10.0.0.2 00:22:48.477 eflags: none 00:22:48.477 sectype: none 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:22:48.756 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:49.329 23:03:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:22:49.329 23:03:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:22:49.329 23:03:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:49.329 23:03:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:22:49.329 23:03:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:22:49.329 23:03:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:22:51.236 /dev/nvme0n1 ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:22:51.236 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:51.496 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.496 rmmod nvme_tcp 00:22:51.496 rmmod nvme_fabrics 00:22:51.496 rmmod nvme_keyring 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 883190 ']' 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 883190 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 883190 ']' 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 883190 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883190 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883190' 00:22:51.496 killing process with pid 883190 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 883190 00:22:51.496 23:03:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 883190 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.067 23:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.977 00:22:53.977 real 0m9.724s 00:22:53.977 user 0m16.334s 00:22:53.977 sys 0m3.322s 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:22:53.977 ************************************ 00:22:53.977 END TEST nvmf_nvme_cli 00:22:53.977 ************************************ 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.977 23:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:54.238 ************************************ 00:22:54.238 START TEST nvmf_vfio_user 00:22:54.238 ************************************ 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:22:54.238 * Looking for test storage... 
00:22:54.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.238 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:22:54.239 23:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=883992 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 883992' 00:22:54.239 Process pid: 883992 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 883992 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 883992 ']' 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.239 23:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:22:54.239 [2024-07-22 23:03:30.507814] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:22:54.239 [2024-07-22 23:03:30.507981] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.500 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.500 [2024-07-22 23:03:30.639637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.500 [2024-07-22 23:03:30.794632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.500 [2024-07-22 23:03:30.794732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:54.500 [2024-07-22 23:03:30.794768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.500 [2024-07-22 23:03:30.794814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.500 [2024-07-22 23:03:30.794841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.500 [2024-07-22 23:03:30.794960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.500 [2024-07-22 23:03:30.795022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.500 [2024-07-22 23:03:30.795104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.500 [2024-07-22 23:03:30.795538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.759 23:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.759 23:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:22:54.759 23:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:22:56.141 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:22:56.403 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:22:56.403 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:22:56.403 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:56.403 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:22:56.403 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:56.974 Malloc1 00:22:56.974 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:22:57.542 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:22:58.110 23:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:22:58.369 23:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:58.369 23:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:22:58.369 23:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:22:58.937 Malloc2 00:22:58.937 23:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
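The vfio-user variant repeats the same pattern with a VFIOUSER transport: each of the two devices gets its own malloc bdev and subsystem, and the listener address is a directory under /var/run/vfio-user rather than an IP and port. Condensed from the commands recorded in this part of the trace (the second device's namespace and listener are added just below), an illustrative per-device sketch looks like this; the loop form is a simplification of the script's seq-driven setup, not its literal text:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done
  # an initiator then attaches over the vfio-user socket, as the trace does with:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci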
00:22:59.506 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:23:00.077 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:23:00.649 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:23:00.649 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:23:00.649 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:00.649 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:23:00.649 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:23:00.649 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:23:00.649 [2024-07-22 23:03:36.778225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:23:00.649 [2024-07-22 23:03:36.778340] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884673 ] 00:23:00.649 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.649 [2024-07-22 23:03:36.836488] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:23:00.649 [2024-07-22 23:03:36.839070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:00.649 [2024-07-22 23:03:36.839108] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa8f69ed000 00:23:00.649 [2024-07-22 23:03:36.840064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.841050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.842052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.843054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.847324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.848078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.849085] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.850086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:00.649 [2024-07-22 23:03:36.851090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:00.649 [2024-07-22 23:03:36.851118] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa8f57a1000 00:23:00.649 [2024-07-22 23:03:36.852696] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:00.649 [2024-07-22 23:03:36.871013] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:23:00.649 [2024-07-22 23:03:36.871062] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:23:00.649 [2024-07-22 23:03:36.876251] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:23:00.649 [2024-07-22 23:03:36.876332] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:23:00.649 [2024-07-22 23:03:36.876475] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:23:00.649 [2024-07-22 23:03:36.876513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:23:00.649 [2024-07-22 23:03:36.876528] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:23:00.649 [2024-07-22 23:03:36.877241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:23:00.649 [2024-07-22 23:03:36.877274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:23:00.649 [2024-07-22 23:03:36.877293] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:23:00.649 [2024-07-22 23:03:36.878248] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:23:00.649 [2024-07-22 23:03:36.878277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:23:00.649 [2024-07-22 23:03:36.878297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:23:00.649 [2024-07-22 23:03:36.879253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:23:00.649 [2024-07-22 23:03:36.879279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:00.649 [2024-07-22 23:03:36.880257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:23:00.649 [2024-07-22 23:03:36.880282] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:23:00.649 [2024-07-22 23:03:36.880295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:23:00.649 [2024-07-22 23:03:36.880316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:00.649 [2024-07-22 23:03:36.880430] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:23:00.649 [2024-07-22 23:03:36.880443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:00.649 [2024-07-22 23:03:36.880454] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:23:00.649 [2024-07-22 23:03:36.882323] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:23:00.649 [2024-07-22 23:03:36.883278] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:23:00.649 [2024-07-22 23:03:36.884282] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:23:00.649 [2024-07-22 23:03:36.885274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:00.649 [2024-07-22 23:03:36.885420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:00.649 [2024-07-22 23:03:36.886294] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:23:00.649 [2024-07-22 23:03:36.886337] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:00.649 [2024-07-22 23:03:36.886354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:23:00.649 [2024-07-22 23:03:36.886388] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:23:00.649 [2024-07-22 23:03:36.886415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:23:00.649 [2024-07-22 23:03:36.886447] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:00.649 [2024-07-22 23:03:36.886460] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:00.649 [2024-07-22 23:03:36.886469] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.649 [2024-07-22 23:03:36.886494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:00.649 [2024-07-22 23:03:36.886565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:23:00.649 [2024-07-22 23:03:36.886590] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:23:00.649 [2024-07-22 23:03:36.886602] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:23:00.649 [2024-07-22 23:03:36.886613] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:23:00.649 [2024-07-22 23:03:36.886623] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:23:00.649 [2024-07-22 23:03:36.886634] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:23:00.649 [2024-07-22 23:03:36.886651] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:23:00.649 [2024-07-22 23:03:36.886661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:23:00.649 [2024-07-22 23:03:36.886678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:23:00.649 [2024-07-22 23:03:36.886698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:23:00.649 [2024-07-22 23:03:36.886725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:23:00.649 [2024-07-22 23:03:36.886753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.649 [2024-07-22 23:03:36.886772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.649 [2024-07-22 23:03:36.886788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.649 [2024-07-22 23:03:36.886805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.650 [2024-07-22 23:03:36.886816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.886836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.886856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.886872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.886887] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:23:00.650 
[2024-07-22 23:03:36.886898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.886922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.886936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.886954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887099] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:23:00.650 [2024-07-22 23:03:36.887110] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:23:00.650 [2024-07-22 23:03:36.887118] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.650 [2024-07-22 23:03:36.887131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887178] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:23:00.650 [2024-07-22 23:03:36.887203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887238] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:00.650 [2024-07-22 23:03:36.887250] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:00.650 [2024-07-22 23:03:36.887258] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.650 [2024-07-22 23:03:36.887271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887409] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:00.650 [2024-07-22 23:03:36.887420] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:00.650 [2024-07-22 23:03:36.887428] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.650 [2024-07-22 23:03:36.887442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887568] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:23:00.650 [2024-07-22 23:03:36.887579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:23:00.650 [2024-07-22 23:03:36.887591] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:23:00.650 [2024-07-22 23:03:36.887625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:23:00.650 [2024-07-22 
23:03:36.887736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.887804] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:23:00.650 [2024-07-22 23:03:36.887818] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:23:00.650 [2024-07-22 23:03:36.887827] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:23:00.650 [2024-07-22 23:03:36.887835] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:23:00.650 [2024-07-22 23:03:36.887843] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:23:00.650 [2024-07-22 23:03:36.887856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:23:00.650 [2024-07-22 23:03:36.887872] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:23:00.650 [2024-07-22 23:03:36.887883] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:23:00.650 [2024-07-22 23:03:36.887891] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.650 [2024-07-22 23:03:36.887904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887925] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:23:00.650 [2024-07-22 23:03:36.887937] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:00.650 [2024-07-22 23:03:36.887945] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.650 [2024-07-22 23:03:36.887957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.887973] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:23:00.650 [2024-07-22 23:03:36.887984] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:23:00.650 [2024-07-22 23:03:36.887992] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:00.650 [2024-07-22 23:03:36.888005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:23:00.650 [2024-07-22 23:03:36.888021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.888048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 
23:03:36.888072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:23:00.650 [2024-07-22 23:03:36.888088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:23:00.650 ===================================================== 00:23:00.650 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:00.650 ===================================================== 00:23:00.650 Controller Capabilities/Features 00:23:00.650 ================================ 00:23:00.650 Vendor ID: 4e58 00:23:00.650 Subsystem Vendor ID: 4e58 00:23:00.650 Serial Number: SPDK1 00:23:00.650 Model Number: SPDK bdev Controller 00:23:00.650 Firmware Version: 24.09 00:23:00.650 Recommended Arb Burst: 6 00:23:00.650 IEEE OUI Identifier: 8d 6b 50 00:23:00.650 Multi-path I/O 00:23:00.650 May have multiple subsystem ports: Yes 00:23:00.650 May have multiple controllers: Yes 00:23:00.650 Associated with SR-IOV VF: No 00:23:00.650 Max Data Transfer Size: 131072 00:23:00.650 Max Number of Namespaces: 32 00:23:00.651 Max Number of I/O Queues: 127 00:23:00.651 NVMe Specification Version (VS): 1.3 00:23:00.651 NVMe Specification Version (Identify): 1.3 00:23:00.651 Maximum Queue Entries: 256 00:23:00.651 Contiguous Queues Required: Yes 00:23:00.651 Arbitration Mechanisms Supported 00:23:00.651 Weighted Round Robin: Not Supported 00:23:00.651 Vendor Specific: Not Supported 00:23:00.651 Reset Timeout: 15000 ms 00:23:00.651 Doorbell Stride: 4 bytes 00:23:00.651 NVM Subsystem Reset: Not Supported 00:23:00.651 Command Sets Supported 00:23:00.651 NVM Command Set: Supported 00:23:00.651 Boot Partition: Not Supported 00:23:00.651 Memory Page Size Minimum: 4096 bytes 00:23:00.651 Memory Page Size Maximum: 4096 bytes 00:23:00.651 Persistent Memory Region: Not Supported 00:23:00.651 Optional Asynchronous Events Supported 00:23:00.651 Namespace Attribute Notices: Supported 00:23:00.651 Firmware Activation Notices: Not Supported 00:23:00.651 ANA Change Notices: Not Supported 00:23:00.651 PLE Aggregate Log Change Notices: Not Supported 00:23:00.651 LBA Status Info Alert Notices: Not Supported 00:23:00.651 EGE Aggregate Log Change Notices: Not Supported 00:23:00.651 Normal NVM Subsystem Shutdown event: Not Supported 00:23:00.651 Zone Descriptor Change Notices: Not Supported 00:23:00.651 Discovery Log Change Notices: Not Supported 00:23:00.651 Controller Attributes 00:23:00.651 128-bit Host Identifier: Supported 00:23:00.651 Non-Operational Permissive Mode: Not Supported 00:23:00.651 NVM Sets: Not Supported 00:23:00.651 Read Recovery Levels: Not Supported 00:23:00.651 Endurance Groups: Not Supported 00:23:00.651 Predictable Latency Mode: Not Supported 00:23:00.651 Traffic Based Keep ALive: Not Supported 00:23:00.651 Namespace Granularity: Not Supported 00:23:00.651 SQ Associations: Not Supported 00:23:00.651 UUID List: Not Supported 00:23:00.651 Multi-Domain Subsystem: Not Supported 00:23:00.651 Fixed Capacity Management: Not Supported 00:23:00.651 Variable Capacity Management: Not Supported 00:23:00.651 Delete Endurance Group: Not Supported 00:23:00.651 Delete NVM Set: Not Supported 00:23:00.651 Extended LBA Formats Supported: Not Supported 00:23:00.651 Flexible Data Placement Supported: Not Supported 00:23:00.651 00:23:00.651 Controller Memory Buffer Support 00:23:00.651 ================================ 00:23:00.651 Supported: No 00:23:00.651 00:23:00.651 Persistent 
Memory Region Support 00:23:00.651 ================================ 00:23:00.651 Supported: No 00:23:00.651 00:23:00.651 Admin Command Set Attributes 00:23:00.651 ============================ 00:23:00.651 Security Send/Receive: Not Supported 00:23:00.651 Format NVM: Not Supported 00:23:00.651 Firmware Activate/Download: Not Supported 00:23:00.651 Namespace Management: Not Supported 00:23:00.651 Device Self-Test: Not Supported 00:23:00.651 Directives: Not Supported 00:23:00.651 NVMe-MI: Not Supported 00:23:00.651 Virtualization Management: Not Supported 00:23:00.651 Doorbell Buffer Config: Not Supported 00:23:00.651 Get LBA Status Capability: Not Supported 00:23:00.651 Command & Feature Lockdown Capability: Not Supported 00:23:00.651 Abort Command Limit: 4 00:23:00.651 Async Event Request Limit: 4 00:23:00.651 Number of Firmware Slots: N/A 00:23:00.651 Firmware Slot 1 Read-Only: N/A 00:23:00.651 Firmware Activation Without Reset: N/A 00:23:00.651 Multiple Update Detection Support: N/A 00:23:00.651 Firmware Update Granularity: No Information Provided 00:23:00.651 Per-Namespace SMART Log: No 00:23:00.651 Asymmetric Namespace Access Log Page: Not Supported 00:23:00.651 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:23:00.651 Command Effects Log Page: Supported 00:23:00.651 Get Log Page Extended Data: Supported 00:23:00.651 Telemetry Log Pages: Not Supported 00:23:00.651 Persistent Event Log Pages: Not Supported 00:23:00.651 Supported Log Pages Log Page: May Support 00:23:00.651 Commands Supported & Effects Log Page: Not Supported 00:23:00.651 Feature Identifiers & Effects Log Page:May Support 00:23:00.651 NVMe-MI Commands & Effects Log Page: May Support 00:23:00.651 Data Area 4 for Telemetry Log: Not Supported 00:23:00.651 Error Log Page Entries Supported: 128 00:23:00.651 Keep Alive: Supported 00:23:00.651 Keep Alive Granularity: 10000 ms 00:23:00.651 00:23:00.651 NVM Command Set Attributes 00:23:00.651 ========================== 00:23:00.651 Submission Queue Entry Size 00:23:00.651 Max: 64 00:23:00.651 Min: 64 00:23:00.651 Completion Queue Entry Size 00:23:00.651 Max: 16 00:23:00.651 Min: 16 00:23:00.651 Number of Namespaces: 32 00:23:00.651 Compare Command: Supported 00:23:00.651 Write Uncorrectable Command: Not Supported 00:23:00.651 Dataset Management Command: Supported 00:23:00.651 Write Zeroes Command: Supported 00:23:00.651 Set Features Save Field: Not Supported 00:23:00.651 Reservations: Not Supported 00:23:00.651 Timestamp: Not Supported 00:23:00.651 Copy: Supported 00:23:00.651 Volatile Write Cache: Present 00:23:00.651 Atomic Write Unit (Normal): 1 00:23:00.651 Atomic Write Unit (PFail): 1 00:23:00.651 Atomic Compare & Write Unit: 1 00:23:00.651 Fused Compare & Write: Supported 00:23:00.651 Scatter-Gather List 00:23:00.651 SGL Command Set: Supported (Dword aligned) 00:23:00.651 SGL Keyed: Not Supported 00:23:00.651 SGL Bit Bucket Descriptor: Not Supported 00:23:00.651 SGL Metadata Pointer: Not Supported 00:23:00.651 Oversized SGL: Not Supported 00:23:00.651 SGL Metadata Address: Not Supported 00:23:00.651 SGL Offset: Not Supported 00:23:00.651 Transport SGL Data Block: Not Supported 00:23:00.651 Replay Protected Memory Block: Not Supported 00:23:00.651 00:23:00.651 Firmware Slot Information 00:23:00.651 ========================= 00:23:00.651 Active slot: 1 00:23:00.651 Slot 1 Firmware Revision: 24.09 00:23:00.651 00:23:00.651 00:23:00.651 Commands Supported and Effects 00:23:00.651 ============================== 00:23:00.651 Admin Commands 00:23:00.651 -------------- 00:23:00.651 Get 
Log Page (02h): Supported 00:23:00.651 Identify (06h): Supported 00:23:00.651 Abort (08h): Supported 00:23:00.651 Set Features (09h): Supported 00:23:00.651 Get Features (0Ah): Supported 00:23:00.651 Asynchronous Event Request (0Ch): Supported 00:23:00.651 Keep Alive (18h): Supported 00:23:00.651 I/O Commands 00:23:00.651 ------------ 00:23:00.651 Flush (00h): Supported LBA-Change 00:23:00.651 Write (01h): Supported LBA-Change 00:23:00.651 Read (02h): Supported 00:23:00.651 Compare (05h): Supported 00:23:00.651 Write Zeroes (08h): Supported LBA-Change 00:23:00.651 Dataset Management (09h): Supported LBA-Change 00:23:00.651 Copy (19h): Supported LBA-Change 00:23:00.651 00:23:00.651 Error Log 00:23:00.651 ========= 00:23:00.651 00:23:00.651 Arbitration 00:23:00.651 =========== 00:23:00.651 Arbitration Burst: 1 00:23:00.651 00:23:00.651 Power Management 00:23:00.651 ================ 00:23:00.651 Number of Power States: 1 00:23:00.651 Current Power State: Power State #0 00:23:00.651 Power State #0: 00:23:00.651 Max Power: 0.00 W 00:23:00.651 Non-Operational State: Operational 00:23:00.651 Entry Latency: Not Reported 00:23:00.651 Exit Latency: Not Reported 00:23:00.651 Relative Read Throughput: 0 00:23:00.651 Relative Read Latency: 0 00:23:00.651 Relative Write Throughput: 0 00:23:00.651 Relative Write Latency: 0 00:23:00.651 Idle Power: Not Reported 00:23:00.651 Active Power: Not Reported 00:23:00.651 Non-Operational Permissive Mode: Not Supported 00:23:00.651 00:23:00.651 Health Information 00:23:00.651 ================== 00:23:00.651 Critical Warnings: 00:23:00.651 Available Spare Space: OK 00:23:00.651 Temperature: OK 00:23:00.651 Device Reliability: OK 00:23:00.651 Read Only: No 00:23:00.651 Volatile Memory Backup: OK 00:23:00.651 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:00.651 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:00.651 Available Spare: 0% 00:23:00.651 Available Sp[2024-07-22 23:03:36.888261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:23:00.651 [2024-07-22 23:03:36.888284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:23:00.651 [2024-07-22 23:03:36.888357] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:23:00.652 [2024-07-22 23:03:36.888385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.652 [2024-07-22 23:03:36.888401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.652 [2024-07-22 23:03:36.888415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.652 [2024-07-22 23:03:36.888429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.652 [2024-07-22 23:03:36.892331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:23:00.652 [2024-07-22 23:03:36.892373] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:23:00.652 [2024-07-22 23:03:36.893330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:00.652 [2024-07-22 23:03:36.893448] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:23:00.652 [2024-07-22 23:03:36.893476] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:23:00.652 [2024-07-22 23:03:36.894354] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:23:00.652 [2024-07-22 23:03:36.894392] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:23:00.652 [2024-07-22 23:03:36.894467] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:23:00.652 [2024-07-22 23:03:36.897330] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:00.652 are Threshold: 0% 00:23:00.652 Life Percentage Used: 0% 00:23:00.652 Data Units Read: 0 00:23:00.652 Data Units Written: 0 00:23:00.652 Host Read Commands: 0 00:23:00.652 Host Write Commands: 0 00:23:00.652 Controller Busy Time: 0 minutes 00:23:00.652 Power Cycles: 0 00:23:00.652 Power On Hours: 0 hours 00:23:00.652 Unsafe Shutdowns: 0 00:23:00.652 Unrecoverable Media Errors: 0 00:23:00.652 Lifetime Error Log Entries: 0 00:23:00.652 Warning Temperature Time: 0 minutes 00:23:00.652 Critical Temperature Time: 0 minutes 00:23:00.652 00:23:00.652 Number of Queues 00:23:00.652 ================ 00:23:00.652 Number of I/O Submission Queues: 127 00:23:00.652 Number of I/O Completion Queues: 127 00:23:00.652 00:23:00.652 Active Namespaces 00:23:00.652 ================= 00:23:00.652 Namespace ID:1 00:23:00.652 Error Recovery Timeout: Unlimited 00:23:00.652 Command Set Identifier: NVM (00h) 00:23:00.652 Deallocate: Supported 00:23:00.652 Deallocated/Unwritten Error: Not Supported 00:23:00.652 Deallocated Read Value: Unknown 00:23:00.652 Deallocate in Write Zeroes: Not Supported 00:23:00.652 Deallocated Guard Field: 0xFFFF 00:23:00.652 Flush: Supported 00:23:00.652 Reservation: Supported 00:23:00.652 Namespace Sharing Capabilities: Multiple Controllers 00:23:00.652 Size (in LBAs): 131072 (0GiB) 00:23:00.652 Capacity (in LBAs): 131072 (0GiB) 00:23:00.652 Utilization (in LBAs): 131072 (0GiB) 00:23:00.652 NGUID: ED50B53EE47144C3989D9A964E82761F 00:23:00.652 UUID: ed50b53e-e471-44c3-989d-9a964e82761f 00:23:00.652 Thin Provisioning: Not Supported 00:23:00.652 Per-NS Atomic Units: Yes 00:23:00.652 Atomic Boundary Size (Normal): 0 00:23:00.652 Atomic Boundary Size (PFail): 0 00:23:00.652 Atomic Boundary Offset: 0 00:23:00.652 Maximum Single Source Range Length: 65535 00:23:00.652 Maximum Copy Length: 65535 00:23:00.652 Maximum Source Range Count: 1 00:23:00.652 NGUID/EUI64 Never Reused: No 00:23:00.652 Namespace Write Protected: No 00:23:00.652 Number of LBA Formats: 1 00:23:00.652 Current LBA Format: LBA Format #00 00:23:00.652 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:00.652 00:23:00.652 23:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:23:00.912 EAL: No free 2048 kB hugepages reported 
on node 1 00:23:00.912 [2024-07-22 23:03:37.203229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:06.193 Initializing NVMe Controllers 00:23:06.193 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:06.193 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:23:06.194 Initialization complete. Launching workers. 00:23:06.194 ======================================================== 00:23:06.194 Latency(us) 00:23:06.194 Device Information : IOPS MiB/s Average min max 00:23:06.194 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24060.56 93.99 5324.48 1657.29 11486.56 00:23:06.194 ======================================================== 00:23:06.194 Total : 24060.56 93.99 5324.48 1657.29 11486.56 00:23:06.194 00:23:06.194 [2024-07-22 23:03:42.228920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:06.194 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:23:06.194 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.194 [2024-07-22 23:03:42.497388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:11.461 Initializing NVMe Controllers 00:23:11.461 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:11.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:23:11.461 Initialization complete. Launching workers. 
00:23:11.461 ======================================================== 00:23:11.461 Latency(us) 00:23:11.461 Device Information : IOPS MiB/s Average min max 00:23:11.462 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.21 62.65 7979.50 5968.46 9977.82 00:23:11.462 ======================================================== 00:23:11.462 Total : 16039.21 62.65 7979.50 5968.46 9977.82 00:23:11.462 00:23:11.462 [2024-07-22 23:03:47.533780] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:11.462 23:03:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:23:11.462 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.722 [2024-07-22 23:03:47.786114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:17.034 [2024-07-22 23:03:52.856710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:17.035 Initializing NVMe Controllers 00:23:17.035 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:17.035 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:23:17.035 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:23:17.035 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:23:17.035 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:23:17.035 Initialization complete. Launching workers. 00:23:17.035 Starting thread on core 2 00:23:17.035 Starting thread on core 3 00:23:17.035 Starting thread on core 1 00:23:17.035 23:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:23:17.035 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.035 [2024-07-22 23:03:53.297893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:20.324 [2024-07-22 23:03:56.372509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:20.324 Initializing NVMe Controllers 00:23:20.324 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:20.324 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:20.324 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:23:20.324 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:23:20.324 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:23:20.324 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:23:20.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:23:20.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:23:20.324 Initialization complete. Launching workers. 
00:23:20.324 Starting thread on core 1 with urgent priority queue 00:23:20.324 Starting thread on core 2 with urgent priority queue 00:23:20.324 Starting thread on core 3 with urgent priority queue 00:23:20.324 Starting thread on core 0 with urgent priority queue 00:23:20.324 SPDK bdev Controller (SPDK1 ) core 0: 4085.33 IO/s 24.48 secs/100000 ios 00:23:20.324 SPDK bdev Controller (SPDK1 ) core 1: 3995.33 IO/s 25.03 secs/100000 ios 00:23:20.324 SPDK bdev Controller (SPDK1 ) core 2: 4037.67 IO/s 24.77 secs/100000 ios 00:23:20.324 SPDK bdev Controller (SPDK1 ) core 3: 4105.00 IO/s 24.36 secs/100000 ios 00:23:20.324 ======================================================== 00:23:20.324 00:23:20.324 23:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:23:20.324 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.583 [2024-07-22 23:03:56.799979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:20.583 Initializing NVMe Controllers 00:23:20.583 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:20.583 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:20.583 Namespace ID: 1 size: 0GB 00:23:20.583 Initialization complete. 00:23:20.583 INFO: using host memory buffer for IO 00:23:20.583 Hello world! 00:23:20.583 [2024-07-22 23:03:56.834025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:20.842 23:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:23:20.842 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.100 [2024-07-22 23:03:57.221881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:22.039 Initializing NVMe Controllers 00:23:22.039 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:22.039 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:22.039 Initialization complete. Launching workers. 
00:23:22.039 submit (in ns) avg, min, max = 9260.7, 5094.1, 4006391.1 00:23:22.039 complete (in ns) avg, min, max = 36997.8, 3003.0, 4006940.0 00:23:22.039 00:23:22.039 Submit histogram 00:23:22.039 ================ 00:23:22.039 Range in us Cumulative Count 00:23:22.039 5.073 - 5.096: 0.0103% ( 1) 00:23:22.039 5.096 - 5.120: 0.2373% ( 22) 00:23:22.039 5.120 - 5.144: 1.6918% ( 141) 00:23:22.039 5.144 - 5.167: 4.9412% ( 315) 00:23:22.039 5.167 - 5.191: 10.7077% ( 559) 00:23:22.039 5.191 - 5.215: 18.7745% ( 782) 00:23:22.039 5.215 - 5.239: 24.9948% ( 603) 00:23:22.039 5.239 - 5.262: 29.6782% ( 454) 00:23:22.039 5.262 - 5.286: 32.4015% ( 264) 00:23:22.039 5.286 - 5.310: 33.9385% ( 149) 00:23:22.039 5.310 - 5.333: 35.6406% ( 165) 00:23:22.039 5.333 - 5.357: 37.7450% ( 204) 00:23:22.039 5.357 - 5.381: 41.2936% ( 344) 00:23:22.039 5.381 - 5.404: 45.7603% ( 433) 00:23:22.039 5.404 - 5.428: 49.2470% ( 338) 00:23:22.039 5.428 - 5.452: 51.8568% ( 253) 00:23:22.039 5.452 - 5.476: 53.4764% ( 157) 00:23:22.039 5.476 - 5.499: 54.6111% ( 110) 00:23:22.039 5.499 - 5.523: 55.5498% ( 91) 00:23:22.039 5.523 - 5.547: 56.3029% ( 73) 00:23:22.039 5.547 - 5.570: 57.0869% ( 76) 00:23:22.039 5.570 - 5.594: 58.0462% ( 93) 00:23:22.039 5.594 - 5.618: 58.7374% ( 67) 00:23:22.039 5.618 - 5.641: 58.9953% ( 25) 00:23:22.039 5.641 - 5.665: 59.3047% ( 30) 00:23:22.039 5.665 - 5.689: 60.4394% ( 110) 00:23:22.039 5.689 - 5.713: 64.7927% ( 422) 00:23:22.039 5.713 - 5.736: 70.8995% ( 592) 00:23:22.039 5.736 - 5.760: 83.4124% ( 1213) 00:23:22.039 5.760 - 5.784: 92.6037% ( 891) 00:23:22.039 5.784 - 5.807: 95.4198% ( 273) 00:23:22.039 5.807 - 5.831: 96.0491% ( 61) 00:23:22.039 5.831 - 5.855: 96.4720% ( 41) 00:23:22.039 5.855 - 5.879: 96.6474% ( 17) 00:23:22.039 5.879 - 5.902: 96.7506% ( 10) 00:23:22.039 5.902 - 5.926: 96.8537% ( 10) 00:23:22.039 5.926 - 5.950: 96.9259% ( 7) 00:23:22.039 5.950 - 5.973: 97.0085% ( 8) 00:23:22.039 5.973 - 5.997: 97.1529% ( 14) 00:23:22.039 5.997 - 6.021: 97.4417% ( 28) 00:23:22.039 6.021 - 6.044: 97.7821% ( 33) 00:23:22.039 6.044 - 6.068: 97.9162% ( 13) 00:23:22.039 6.068 - 6.116: 98.0400% ( 12) 00:23:22.039 6.116 - 6.163: 98.1122% ( 7) 00:23:22.039 6.163 - 6.210: 98.2463% ( 13) 00:23:22.039 6.210 - 6.258: 98.3185% ( 7) 00:23:22.039 6.258 - 6.305: 98.3804% ( 6) 00:23:22.039 6.305 - 6.353: 98.4423% ( 6) 00:23:22.039 6.353 - 6.400: 98.5145% ( 7) 00:23:22.039 6.400 - 6.447: 98.5558% ( 4) 00:23:22.039 6.447 - 6.495: 98.6074% ( 5) 00:23:22.039 6.495 - 6.542: 98.6796% ( 7) 00:23:22.039 6.542 - 6.590: 98.7105% ( 3) 00:23:22.039 6.590 - 6.637: 98.7415% ( 3) 00:23:22.039 6.637 - 6.684: 98.7724% ( 3) 00:23:22.039 6.732 - 6.779: 98.8343% ( 6) 00:23:22.039 6.779 - 6.827: 98.8859% ( 5) 00:23:22.039 6.827 - 6.874: 98.9375% ( 5) 00:23:22.039 6.874 - 6.921: 98.9581% ( 2) 00:23:22.039 6.969 - 7.016: 98.9787% ( 2) 00:23:22.040 7.016 - 7.064: 98.9891% ( 1) 00:23:22.040 7.064 - 7.111: 98.9994% ( 1) 00:23:22.040 7.111 - 7.159: 99.0097% ( 1) 00:23:22.040 7.159 - 7.206: 99.0303% ( 2) 00:23:22.040 7.206 - 7.253: 99.0510% ( 2) 00:23:22.040 7.396 - 7.443: 99.0613% ( 1) 00:23:22.040 8.059 - 8.107: 99.0716% ( 1) 00:23:22.040 8.107 - 8.154: 99.0819% ( 1) 00:23:22.040 8.296 - 8.344: 99.1025% ( 2) 00:23:22.040 8.486 - 8.533: 99.1129% ( 1) 00:23:22.040 8.676 - 8.723: 99.1232% ( 1) 00:23:22.040 9.007 - 9.055: 99.1335% ( 1) 00:23:22.040 9.055 - 9.102: 99.1541% ( 2) 00:23:22.040 9.197 - 9.244: 99.1644% ( 1) 00:23:22.040 9.292 - 9.339: 99.1851% ( 2) 00:23:22.040 9.434 - 9.481: 99.1954% ( 1) 00:23:22.040 9.576 - 9.624: 99.2057% ( 1) 
00:23:22.040 9.624 - 9.671: 99.2160% ( 1) 00:23:22.040 9.719 - 9.766: 99.2263% ( 1) 00:23:22.040 9.766 - 9.813: 99.2366% ( 1) 00:23:22.040 9.813 - 9.861: 99.2470% ( 1) 00:23:22.040 9.908 - 9.956: 99.2573% ( 1) 00:23:22.040 9.956 - 10.003: 99.2779% ( 2) 00:23:22.040 10.098 - 10.145: 99.3089% ( 3) 00:23:22.040 10.145 - 10.193: 99.3295% ( 2) 00:23:22.040 10.193 - 10.240: 99.3398% ( 1) 00:23:22.040 10.240 - 10.287: 99.3501% ( 1) 00:23:22.040 10.335 - 10.382: 99.3604% ( 1) 00:23:22.040 10.382 - 10.430: 99.3707% ( 1) 00:23:22.040 10.430 - 10.477: 99.4017% ( 3) 00:23:22.040 10.572 - 10.619: 99.4120% ( 1) 00:23:22.040 10.714 - 10.761: 99.4223% ( 1) 00:23:22.040 10.761 - 10.809: 99.4326% ( 1) 00:23:22.040 10.809 - 10.856: 99.4533% ( 2) 00:23:22.040 10.999 - 11.046: 99.4739% ( 2) 00:23:22.040 11.188 - 11.236: 99.4842% ( 1) 00:23:22.040 11.330 - 11.378: 99.5255% ( 4) 00:23:22.040 11.378 - 11.425: 99.5358% ( 1) 00:23:22.040 11.425 - 11.473: 99.5461% ( 1) 00:23:22.040 11.567 - 11.615: 99.5564% ( 1) 00:23:22.040 11.662 - 11.710: 99.5771% ( 2) 00:23:22.040 11.710 - 11.757: 99.5874% ( 1) 00:23:22.040 11.804 - 11.852: 99.5977% ( 1) 00:23:22.040 11.899 - 11.947: 99.6080% ( 1) 00:23:22.040 12.041 - 12.089: 99.6183% ( 1) 00:23:22.040 12.895 - 12.990: 99.6286% ( 1) 00:23:22.040 13.084 - 13.179: 99.6390% ( 1) 00:23:22.040 13.464 - 13.559: 99.6493% ( 1) 00:23:22.040 13.843 - 13.938: 99.6699% ( 2) 00:23:22.040 13.938 - 14.033: 99.6802% ( 1) 00:23:22.040 14.033 - 14.127: 99.6905% ( 1) 00:23:22.040 14.127 - 14.222: 99.7008% ( 1) 00:23:22.040 14.222 - 14.317: 99.7112% ( 1) 00:23:22.040 14.317 - 14.412: 99.7215% ( 1) 00:23:22.040 14.412 - 14.507: 99.7318% ( 1) 00:23:22.040 14.507 - 14.601: 99.7421% ( 1) 00:23:22.040 14.601 - 14.696: 99.7524% ( 1) 00:23:22.040 14.791 - 14.886: 99.7731% ( 2) 00:23:22.040 14.886 - 14.981: 99.7834% ( 1) 00:23:22.040 15.265 - 15.360: 99.7937% ( 1) 00:23:22.040 15.360 - 15.455: 99.8143% ( 2) 00:23:22.040 15.739 - 15.834: 99.8246% ( 1) 00:23:22.040 15.834 - 15.929: 99.8349% ( 1) 00:23:22.040 16.024 - 16.119: 99.8556% ( 2) 00:23:22.040 16.403 - 16.498: 99.8659% ( 1) 00:23:22.040 16.593 - 16.687: 99.8762% ( 1) 00:23:22.040 16.687 - 16.782: 99.8865% ( 1) 00:23:22.040 17.161 - 17.256: 99.8968% ( 1) 00:23:22.040 19.153 - 19.247: 99.9072% ( 1) 00:23:22.040 3980.705 - 4004.978: 99.9691% ( 6) 00:23:22.040 4004.978 - 4029.250: 100.0000% ( 3) 00:23:22.040 00:23:22.040 Complete histogram 00:23:22.040 ================== 00:23:22.040 Range in us Cumulative Count 00:23:22.040 2.999 - 3.010: 1.3823% ( 134) 00:23:22.040 3.010 - 3.022: 24.0561% ( 2198) 00:23:22.040 3.022 - 3.034: 46.3689% ( 2163) 00:23:22.040 3.034 - 3.058: 54.2294% ( 762) 00:23:22.040 3.058 - 3.081: 85.8469% ( 3065) 00:23:22.040 3.081 - 3.105: 93.5733% ( 749) 00:23:22.040 3.105 - 3.129: 97.7409% ( 404) 00:23:22.040 3.129 - 3.153: 98.2051% ( 45) 00:23:22.040 3.153 - 3.176: 98.3392% ( 13) 00:23:22.040 3.176 - 3.200: 98.3495% ( 1) 00:23:22.040 3.200 - 3.224: 98.3598% ( 1) 00:23:22.040 3.224 - 3.247: 98.3701% ( 1) 00:23:22.040 3.247 - 3.271: 98.4011% ( 3) 00:23:22.040 3.295 - 3.319: 98.4114% ( 1) 00:23:22.040 3.319 - 3.342: 98.4320% ( 2) 00:23:22.040 3.342 - 3.366: 98.4527% ( 2) 00:23:22.040 3.366 - 3.390: 98.4733% ( 2) 00:23:22.040 3.390 - 3.413: 98.5042% ( 3) 00:23:22.040 3.484 - 3.508: 98.5145% ( 1) 00:23:22.040 3.532 - 3.556: 98.5249% ( 1) 00:23:22.040 3.556 - 3.579: 98.5455% ( 2) 00:23:22.040 3.603 - 3.627: 98.5558% ( 1) 00:23:22.040 3.674 - 3.698: 98.5661% ( 1) 00:23:22.040 3.721 - 3.745: 98.5764% ( 1) 00:23:22.040 3.769 - 3.793: 
98.5868% ( 1) 00:23:22.040 3.840 - 3.864: 98.5971% ( 1) 00:23:22.040 3.887 - 3.911: 98.6074% ( 1) 00:23:22.040 4.006 - 4.030: 98.6177% ( 1) 00:23:22.040 4.053 - 4.077: 98.6383% ( 2) 00:23:22.040 4.101 - 4.124: 98.6693% ( 3) 00:23:22.040 4.124 - 4.148: 98.6899% ( 2) 00:23:22.040 4.148 - 4.172: 98.7105% ( 2) 00:23:22.040 4.172 - 4.196: 98.7209% ( 1) 00:23:22.040 4.196 - 4.2[2024-07-22 23:03:58.248562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:22.040 19: 98.7415% ( 2) 00:23:22.040 4.243 - 4.267: 98.7621% ( 2) 00:23:22.040 4.267 - 4.290: 98.7724% ( 1) 00:23:22.040 4.314 - 4.338: 98.7931% ( 2) 00:23:22.040 4.338 - 4.361: 98.8343% ( 4) 00:23:22.040 4.361 - 4.385: 98.8550% ( 2) 00:23:22.040 4.385 - 4.409: 98.8653% ( 1) 00:23:22.040 4.409 - 4.433: 98.8859% ( 2) 00:23:22.040 4.433 - 4.456: 98.9065% ( 2) 00:23:22.040 4.504 - 4.527: 98.9169% ( 1) 00:23:22.040 4.670 - 4.693: 98.9272% ( 1) 00:23:22.040 5.357 - 5.381: 98.9375% ( 1) 00:23:22.040 5.973 - 5.997: 98.9478% ( 1) 00:23:22.040 7.064 - 7.111: 98.9581% ( 1) 00:23:22.040 7.206 - 7.253: 98.9787% ( 2) 00:23:22.040 7.443 - 7.490: 98.9891% ( 1) 00:23:22.040 7.490 - 7.538: 98.9994% ( 1) 00:23:22.040 8.012 - 8.059: 99.0200% ( 2) 00:23:22.040 8.201 - 8.249: 99.0303% ( 1) 00:23:22.040 8.249 - 8.296: 99.0510% ( 2) 00:23:22.040 8.296 - 8.344: 99.0613% ( 1) 00:23:22.040 8.960 - 9.007: 99.0819% ( 2) 00:23:22.040 9.150 - 9.197: 99.0922% ( 1) 00:23:22.040 9.576 - 9.624: 99.1129% ( 2) 00:23:22.040 12.326 - 12.421: 99.1232% ( 1) 00:23:22.040 16.119 - 16.213: 99.1335% ( 1) 00:23:22.040 16.308 - 16.403: 99.1438% ( 1) 00:23:22.040 1007.313 - 1013.381: 99.1541% ( 1) 00:23:22.040 3980.705 - 4004.978: 99.9794% ( 80) 00:23:22.040 4004.978 - 4029.250: 100.0000% ( 2) 00:23:22.040 00:23:22.040 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:23:22.040 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:23:22.040 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:23:22.040 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:23:22.040 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:23:22.611 [ 00:23:22.611 { 00:23:22.611 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:22.611 "subtype": "Discovery", 00:23:22.611 "listen_addresses": [], 00:23:22.611 "allow_any_host": true, 00:23:22.611 "hosts": [] 00:23:22.611 }, 00:23:22.611 { 00:23:22.611 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:23:22.611 "subtype": "NVMe", 00:23:22.611 "listen_addresses": [ 00:23:22.611 { 00:23:22.611 "trtype": "VFIOUSER", 00:23:22.611 "adrfam": "IPv4", 00:23:22.611 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:23:22.611 "trsvcid": "0" 00:23:22.611 } 00:23:22.611 ], 00:23:22.611 "allow_any_host": true, 00:23:22.611 "hosts": [], 00:23:22.611 "serial_number": "SPDK1", 00:23:22.611 "model_number": "SPDK bdev Controller", 00:23:22.611 "max_namespaces": 32, 00:23:22.611 "min_cntlid": 1, 00:23:22.611 "max_cntlid": 65519, 00:23:22.611 "namespaces": [ 00:23:22.611 { 00:23:22.611 "nsid": 1, 00:23:22.611 "bdev_name": "Malloc1", 00:23:22.611 "name": "Malloc1", 
00:23:22.611 "nguid": "ED50B53EE47144C3989D9A964E82761F", 00:23:22.611 "uuid": "ed50b53e-e471-44c3-989d-9a964e82761f" 00:23:22.611 } 00:23:22.611 ] 00:23:22.611 }, 00:23:22.611 { 00:23:22.611 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:23:22.611 "subtype": "NVMe", 00:23:22.611 "listen_addresses": [ 00:23:22.611 { 00:23:22.611 "trtype": "VFIOUSER", 00:23:22.611 "adrfam": "IPv4", 00:23:22.611 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:23:22.611 "trsvcid": "0" 00:23:22.611 } 00:23:22.611 ], 00:23:22.611 "allow_any_host": true, 00:23:22.611 "hosts": [], 00:23:22.611 "serial_number": "SPDK2", 00:23:22.611 "model_number": "SPDK bdev Controller", 00:23:22.611 "max_namespaces": 32, 00:23:22.611 "min_cntlid": 1, 00:23:22.611 "max_cntlid": 65519, 00:23:22.611 "namespaces": [ 00:23:22.611 { 00:23:22.611 "nsid": 1, 00:23:22.611 "bdev_name": "Malloc2", 00:23:22.611 "name": "Malloc2", 00:23:22.611 "nguid": "A7160DFEF1F04E09877DA1F606A7E372", 00:23:22.611 "uuid": "a7160dfe-f1f0-4e09-877d-a1f606a7e372" 00:23:22.611 } 00:23:22.611 ] 00:23:22.611 } 00:23:22.611 ] 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=887191 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:23:22.871 23:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:23:22.871 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.871 [2024-07-22 23:03:59.176925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:23:23.441 Malloc3 00:23:23.441 23:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:23:24.011 [2024-07-22 23:04:00.187587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:23:24.011 23:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:23:24.011 Asynchronous Event Request test 00:23:24.011 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:23:24.011 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:23:24.011 Registering asynchronous event callbacks... 00:23:24.011 Starting namespace attribute notice tests for all controllers... 00:23:24.011 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:24.011 aer_cb - Changed Namespace 00:23:24.011 Cleaning up... 00:23:24.580 [ 00:23:24.580 { 00:23:24.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:24.580 "subtype": "Discovery", 00:23:24.580 "listen_addresses": [], 00:23:24.580 "allow_any_host": true, 00:23:24.580 "hosts": [] 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:23:24.580 "subtype": "NVMe", 00:23:24.580 "listen_addresses": [ 00:23:24.580 { 00:23:24.580 "trtype": "VFIOUSER", 00:23:24.580 "adrfam": "IPv4", 00:23:24.580 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:23:24.580 "trsvcid": "0" 00:23:24.580 } 00:23:24.580 ], 00:23:24.580 "allow_any_host": true, 00:23:24.580 "hosts": [], 00:23:24.580 "serial_number": "SPDK1", 00:23:24.580 "model_number": "SPDK bdev Controller", 00:23:24.580 "max_namespaces": 32, 00:23:24.580 "min_cntlid": 1, 00:23:24.580 "max_cntlid": 65519, 00:23:24.580 "namespaces": [ 00:23:24.580 { 00:23:24.580 "nsid": 1, 00:23:24.580 "bdev_name": "Malloc1", 00:23:24.580 "name": "Malloc1", 00:23:24.580 "nguid": "ED50B53EE47144C3989D9A964E82761F", 00:23:24.580 "uuid": "ed50b53e-e471-44c3-989d-9a964e82761f" 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "nsid": 2, 00:23:24.580 "bdev_name": "Malloc3", 00:23:24.580 "name": "Malloc3", 00:23:24.580 "nguid": "AACB462E59AD49EDAC86182B6D4361AA", 00:23:24.580 "uuid": "aacb462e-59ad-49ed-ac86-182b6d4361aa" 00:23:24.580 } 00:23:24.580 ] 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:23:24.580 "subtype": "NVMe", 00:23:24.580 "listen_addresses": [ 00:23:24.580 { 00:23:24.580 "trtype": "VFIOUSER", 00:23:24.580 "adrfam": "IPv4", 00:23:24.580 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:23:24.580 "trsvcid": "0" 00:23:24.580 } 00:23:24.580 ], 00:23:24.580 "allow_any_host": true, 00:23:24.580 "hosts": [], 00:23:24.580 
"serial_number": "SPDK2", 00:23:24.580 "model_number": "SPDK bdev Controller", 00:23:24.580 "max_namespaces": 32, 00:23:24.580 "min_cntlid": 1, 00:23:24.580 "max_cntlid": 65519, 00:23:24.580 "namespaces": [ 00:23:24.580 { 00:23:24.580 "nsid": 1, 00:23:24.580 "bdev_name": "Malloc2", 00:23:24.580 "name": "Malloc2", 00:23:24.580 "nguid": "A7160DFEF1F04E09877DA1F606A7E372", 00:23:24.580 "uuid": "a7160dfe-f1f0-4e09-877d-a1f606a7e372" 00:23:24.580 } 00:23:24.580 ] 00:23:24.580 } 00:23:24.580 ] 00:23:24.580 23:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 887191 00:23:24.580 23:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:24.580 23:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:23:24.580 23:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:23:24.580 23:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:23:24.580 [2024-07-22 23:04:00.807253] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:23:24.580 [2024-07-22 23:04:00.807379] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887445 ] 00:23:24.580 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.580 [2024-07-22 23:04:00.866302] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:23:24.580 [2024-07-22 23:04:00.874678] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:24.580 [2024-07-22 23:04:00.874718] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9bb3239000 00:23:24.580 [2024-07-22 23:04:00.875675] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.876688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.877692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.878699] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.879705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.880718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.881727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.882737] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:23:24.580 [2024-07-22 23:04:00.883749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:23:24.580 [2024-07-22 23:04:00.883780] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9bb1fed000 00:23:24.581 [2024-07-22 23:04:00.885353] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:24.842 [2024-07-22 23:04:00.908952] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:23:24.842 [2024-07-22 23:04:00.908997] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:23:24.842 [2024-07-22 23:04:00.911126] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:23:24.842 [2024-07-22 23:04:00.911199] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:23:24.842 [2024-07-22 23:04:00.911336] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:23:24.842 [2024-07-22 23:04:00.911372] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:23:24.842 [2024-07-22 23:04:00.911386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:23:24.842 [2024-07-22 23:04:00.912129] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:23:24.842 [2024-07-22 23:04:00.912163] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:23:24.842 [2024-07-22 23:04:00.912181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:23:24.842 [2024-07-22 23:04:00.913137] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:23:24.842 [2024-07-22 23:04:00.913174] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:23:24.842 [2024-07-22 23:04:00.913194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:23:24.842 [2024-07-22 23:04:00.914143] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:23:24.842 [2024-07-22 23:04:00.914170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:24.842 [2024-07-22 23:04:00.915155] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:23:24.842 [2024-07-22 23:04:00.915182] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:23:24.842 [2024-07-22 23:04:00.915195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:23:24.842 [2024-07-22 23:04:00.915210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:24.842 [2024-07-22 23:04:00.915323] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:23:24.842 [2024-07-22 23:04:00.915336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:24.842 [2024-07-22 23:04:00.915353] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:23:24.842 [2024-07-22 23:04:00.916161] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:23:24.843 [2024-07-22 23:04:00.917170] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:23:24.843 [2024-07-22 23:04:00.920326] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:23:24.843 [2024-07-22 23:04:00.921202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:24.843 [2024-07-22 23:04:00.921292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:24.843 [2024-07-22 23:04:00.922223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:23:24.843 [2024-07-22 23:04:00.922249] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:24.843 [2024-07-22 23:04:00.922262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.922295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:23:24.843 [2024-07-22 23:04:00.922322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.922355] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:24.843 [2024-07-22 23:04:00.922368] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:24.843 [2024-07-22 23:04:00.922377] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.843 [2024-07-22 23:04:00.922400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.927325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.927361] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:23:24.843 [2024-07-22 23:04:00.927375] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:23:24.843 [2024-07-22 23:04:00.927385] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:23:24.843 [2024-07-22 23:04:00.927396] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:23:24.843 [2024-07-22 23:04:00.927407] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:23:24.843 [2024-07-22 23:04:00.927417] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:23:24.843 [2024-07-22 23:04:00.927428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.927447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.927468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.935326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.935371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.843 [2024-07-22 23:04:00.935392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.843 [2024-07-22 23:04:00.935409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.843 [2024-07-22 23:04:00.935426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.843 [2024-07-22 23:04:00.935438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.935459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.935480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.943327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.943351] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:23:24.843 [2024-07-22 23:04:00.943364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.943379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.943392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.943411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.951322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.951427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.951450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.951469] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:23:24.843 [2024-07-22 23:04:00.951480] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:23:24.843 [2024-07-22 23:04:00.951489] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.843 [2024-07-22 23:04:00.951503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.959323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.959362] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:23:24.843 [2024-07-22 23:04:00.959384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.959404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.959422] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:24.843 [2024-07-22 23:04:00.959439] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:24.843 [2024-07-22 23:04:00.959448] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.843 [2024-07-22 23:04:00.959462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.967326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.967365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.967386] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.967404] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:23:24.843 [2024-07-22 23:04:00.967416] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:24.843 [2024-07-22 23:04:00.967424] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.843 [2024-07-22 23:04:00.967437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.975327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.975356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975449] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:23:24.843 [2024-07-22 23:04:00.975459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:23:24.843 [2024-07-22 23:04:00.975471] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:23:24.843 [2024-07-22 23:04:00.975505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.983323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.983362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.991358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:23:24.843 [2024-07-22 23:04:00.999325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:23:24.843 [2024-07-22 23:04:00.999366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:24.844 [2024-07-22 23:04:01.007321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:23:24.844 [2024-07-22 23:04:01.007379] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:23:24.844 [2024-07-22 23:04:01.007395] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:23:24.844 [2024-07-22 23:04:01.007404] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:23:24.844 [2024-07-22 23:04:01.007412] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:23:24.844 [2024-07-22 23:04:01.007420] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:23:24.844 [2024-07-22 23:04:01.007434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:23:24.844 [2024-07-22 23:04:01.007450] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:23:24.844 [2024-07-22 23:04:01.007461] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:23:24.844 [2024-07-22 23:04:01.007470] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.844 [2024-07-22 23:04:01.007482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:23:24.844 [2024-07-22 23:04:01.007497] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:23:24.844 [2024-07-22 23:04:01.007508] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:23:24.844 [2024-07-22 23:04:01.007516] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.844 [2024-07-22 23:04:01.007528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:23:24.844 [2024-07-22 23:04:01.007545] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:23:24.844 [2024-07-22 23:04:01.007556] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:23:24.844 [2024-07-22 23:04:01.007564] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:23:24.844 [2024-07-22 23:04:01.007576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:23:24.844 [2024-07-22 23:04:01.015328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:23:24.844 [2024-07-22 23:04:01.015368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:23:24.844 [2024-07-22 23:04:01.015394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:23:24.844 [2024-07-22 23:04:01.015410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:23:24.844 ===================================================== 00:23:24.844 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:24.844 ===================================================== 00:23:24.844 Controller Capabilities/Features 00:23:24.844 ================================ 00:23:24.844 Vendor ID: 4e58 00:23:24.844 Subsystem Vendor ID: 4e58 00:23:24.844 Serial Number: SPDK2 00:23:24.844 Model Number: SPDK bdev Controller 00:23:24.844 Firmware Version: 24.09 00:23:24.844 Recommended Arb Burst: 6 00:23:24.844 IEEE OUI Identifier: 8d 6b 50 00:23:24.844 Multi-path I/O 00:23:24.844 May have multiple subsystem ports: Yes 00:23:24.844 May have multiple controllers: Yes 00:23:24.844 Associated with SR-IOV VF: No 00:23:24.844 Max Data Transfer Size: 131072 00:23:24.844 Max Number of Namespaces: 32 00:23:24.844 Max Number of I/O Queues: 127 00:23:24.844 NVMe Specification Version (VS): 1.3 00:23:24.844 NVMe Specification Version (Identify): 1.3 00:23:24.844 Maximum Queue Entries: 256 00:23:24.844 Contiguous Queues Required: Yes 00:23:24.844 Arbitration Mechanisms Supported 00:23:24.844 Weighted Round Robin: Not Supported 00:23:24.844 Vendor Specific: Not Supported 00:23:24.844 Reset Timeout: 15000 ms 00:23:24.844 Doorbell Stride: 4 bytes 00:23:24.844 NVM Subsystem Reset: Not Supported 00:23:24.844 Command Sets Supported 00:23:24.844 NVM Command Set: Supported 00:23:24.844 Boot Partition: Not Supported 00:23:24.844 Memory Page Size Minimum: 4096 bytes 00:23:24.844 Memory Page Size Maximum: 4096 bytes 00:23:24.844 Persistent Memory Region: Not Supported 00:23:24.844 Optional Asynchronous Events Supported 00:23:24.844 Namespace Attribute Notices: Supported 00:23:24.844 Firmware Activation Notices: Not Supported 00:23:24.844 ANA Change Notices: Not Supported 00:23:24.844 PLE Aggregate Log Change Notices: Not Supported 00:23:24.844 LBA Status Info Alert Notices: Not Supported 00:23:24.844 EGE Aggregate Log Change Notices: Not Supported 00:23:24.844 Normal NVM Subsystem Shutdown event: Not Supported 00:23:24.844 Zone Descriptor Change Notices: Not Supported 00:23:24.844 Discovery Log Change Notices: Not Supported 00:23:24.844 Controller Attributes 00:23:24.844 128-bit Host Identifier: Supported 00:23:24.844 Non-Operational Permissive Mode: Not Supported 00:23:24.844 NVM Sets: Not Supported 00:23:24.844 Read Recovery Levels: Not Supported 00:23:24.844 Endurance Groups: Not Supported 00:23:24.844 Predictable Latency Mode: Not Supported 00:23:24.844 Traffic Based Keep ALive: Not Supported 00:23:24.844 Namespace Granularity: Not Supported 00:23:24.844 SQ Associations: Not Supported 00:23:24.844 UUID List: Not Supported 00:23:24.844 Multi-Domain Subsystem: Not Supported 00:23:24.844 Fixed Capacity Management: Not Supported 00:23:24.844 Variable Capacity Management: Not Supported 00:23:24.844 Delete Endurance Group: Not Supported 00:23:24.844 Delete NVM Set: Not Supported 00:23:24.844 Extended LBA Formats Supported: Not Supported 00:23:24.844 Flexible Data Placement Supported: Not Supported 00:23:24.844 00:23:24.844 Controller Memory Buffer Support 00:23:24.844 ================================ 00:23:24.844 Supported: No 00:23:24.844 00:23:24.844 Persistent Memory Region Support 00:23:24.844 
================================ 00:23:24.844 Supported: No 00:23:24.844 00:23:24.844 Admin Command Set Attributes 00:23:24.844 ============================ 00:23:24.844 Security Send/Receive: Not Supported 00:23:24.844 Format NVM: Not Supported 00:23:24.844 Firmware Activate/Download: Not Supported 00:23:24.844 Namespace Management: Not Supported 00:23:24.844 Device Self-Test: Not Supported 00:23:24.844 Directives: Not Supported 00:23:24.844 NVMe-MI: Not Supported 00:23:24.844 Virtualization Management: Not Supported 00:23:24.844 Doorbell Buffer Config: Not Supported 00:23:24.844 Get LBA Status Capability: Not Supported 00:23:24.844 Command & Feature Lockdown Capability: Not Supported 00:23:24.844 Abort Command Limit: 4 00:23:24.844 Async Event Request Limit: 4 00:23:24.844 Number of Firmware Slots: N/A 00:23:24.844 Firmware Slot 1 Read-Only: N/A 00:23:24.844 Firmware Activation Without Reset: N/A 00:23:24.844 Multiple Update Detection Support: N/A 00:23:24.844 Firmware Update Granularity: No Information Provided 00:23:24.844 Per-Namespace SMART Log: No 00:23:24.844 Asymmetric Namespace Access Log Page: Not Supported 00:23:24.844 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:23:24.844 Command Effects Log Page: Supported 00:23:24.844 Get Log Page Extended Data: Supported 00:23:24.844 Telemetry Log Pages: Not Supported 00:23:24.844 Persistent Event Log Pages: Not Supported 00:23:24.844 Supported Log Pages Log Page: May Support 00:23:24.844 Commands Supported & Effects Log Page: Not Supported 00:23:24.844 Feature Identifiers & Effects Log Page:May Support 00:23:24.844 NVMe-MI Commands & Effects Log Page: May Support 00:23:24.844 Data Area 4 for Telemetry Log: Not Supported 00:23:24.844 Error Log Page Entries Supported: 128 00:23:24.844 Keep Alive: Supported 00:23:24.844 Keep Alive Granularity: 10000 ms 00:23:24.844 00:23:24.844 NVM Command Set Attributes 00:23:24.844 ========================== 00:23:24.844 Submission Queue Entry Size 00:23:24.844 Max: 64 00:23:24.844 Min: 64 00:23:24.844 Completion Queue Entry Size 00:23:24.844 Max: 16 00:23:24.844 Min: 16 00:23:24.844 Number of Namespaces: 32 00:23:24.844 Compare Command: Supported 00:23:24.844 Write Uncorrectable Command: Not Supported 00:23:24.844 Dataset Management Command: Supported 00:23:24.844 Write Zeroes Command: Supported 00:23:24.844 Set Features Save Field: Not Supported 00:23:24.844 Reservations: Not Supported 00:23:24.844 Timestamp: Not Supported 00:23:24.844 Copy: Supported 00:23:24.844 Volatile Write Cache: Present 00:23:24.844 Atomic Write Unit (Normal): 1 00:23:24.844 Atomic Write Unit (PFail): 1 00:23:24.844 Atomic Compare & Write Unit: 1 00:23:24.844 Fused Compare & Write: Supported 00:23:24.844 Scatter-Gather List 00:23:24.844 SGL Command Set: Supported (Dword aligned) 00:23:24.844 SGL Keyed: Not Supported 00:23:24.844 SGL Bit Bucket Descriptor: Not Supported 00:23:24.844 SGL Metadata Pointer: Not Supported 00:23:24.844 Oversized SGL: Not Supported 00:23:24.844 SGL Metadata Address: Not Supported 00:23:24.844 SGL Offset: Not Supported 00:23:24.845 Transport SGL Data Block: Not Supported 00:23:24.845 Replay Protected Memory Block: Not Supported 00:23:24.845 00:23:24.845 Firmware Slot Information 00:23:24.845 ========================= 00:23:24.845 Active slot: 1 00:23:24.845 Slot 1 Firmware Revision: 24.09 00:23:24.845 00:23:24.845 00:23:24.845 Commands Supported and Effects 00:23:24.845 ============================== 00:23:24.845 Admin Commands 00:23:24.845 -------------- 00:23:24.845 Get Log Page (02h): Supported 
00:23:24.845 Identify (06h): Supported 00:23:24.845 Abort (08h): Supported 00:23:24.845 Set Features (09h): Supported 00:23:24.845 Get Features (0Ah): Supported 00:23:24.845 Asynchronous Event Request (0Ch): Supported 00:23:24.845 Keep Alive (18h): Supported 00:23:24.845 I/O Commands 00:23:24.845 ------------ 00:23:24.845 Flush (00h): Supported LBA-Change 00:23:24.845 Write (01h): Supported LBA-Change 00:23:24.845 Read (02h): Supported 00:23:24.845 Compare (05h): Supported 00:23:24.845 Write Zeroes (08h): Supported LBA-Change 00:23:24.845 Dataset Management (09h): Supported LBA-Change 00:23:24.845 Copy (19h): Supported LBA-Change 00:23:24.845 00:23:24.845 Error Log 00:23:24.845 ========= 00:23:24.845 00:23:24.845 Arbitration 00:23:24.845 =========== 00:23:24.845 Arbitration Burst: 1 00:23:24.845 00:23:24.845 Power Management 00:23:24.845 ================ 00:23:24.845 Number of Power States: 1 00:23:24.845 Current Power State: Power State #0 00:23:24.845 Power State #0: 00:23:24.845 Max Power: 0.00 W 00:23:24.845 Non-Operational State: Operational 00:23:24.845 Entry Latency: Not Reported 00:23:24.845 Exit Latency: Not Reported 00:23:24.845 Relative Read Throughput: 0 00:23:24.845 Relative Read Latency: 0 00:23:24.845 Relative Write Throughput: 0 00:23:24.845 Relative Write Latency: 0 00:23:24.845 Idle Power: Not Reported 00:23:24.845 Active Power: Not Reported 00:23:24.845 Non-Operational Permissive Mode: Not Supported 00:23:24.845 00:23:24.845 Health Information 00:23:24.845 ================== 00:23:24.845 Critical Warnings: 00:23:24.845 Available Spare Space: OK 00:23:24.845 Temperature: OK 00:23:24.845 Device Reliability: OK 00:23:24.845 Read Only: No 00:23:24.845 Volatile Memory Backup: OK 00:23:24.845 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:24.845 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:24.845 Available Spare: 0% 00:23:24.845 Available Spare Threshold: 0% [2024-07-22 23:04:01.015579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:23:24.845 [2024-07-22 23:04:01.023325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:23:24.845 [2024-07-22 23:04:01.023398] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:23:24.845 [2024-07-22 23:04:01.023423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.845 [2024-07-22 23:04:01.023438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.845 [2024-07-22 23:04:01.023458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.845 [2024-07-22 23:04:01.023472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.845 [2024-07-22 23:04:01.023563] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:23:24.845 [2024-07-22 23:04:01.023591] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:23:24.845 [2024-07-22 23:04:01.024571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling
controller 00:23:24.845 [2024-07-22 23:04:01.024667] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:23:24.845 [2024-07-22 23:04:01.024689] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:23:24.845 [2024-07-22 23:04:01.025571] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:23:24.845 [2024-07-22 23:04:01.025604] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:23:24.845 [2024-07-22 23:04:01.025675] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:23:24.845 [2024-07-22 23:04:01.029324] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:23:24.845 Life Percentage Used: 0% 00:23:24.845 Data Units Read: 0 00:23:24.845 Data Units Written: 0 00:23:24.845 Host Read Commands: 0 00:23:24.845 Host Write Commands: 0 00:23:24.845 Controller Busy Time: 0 minutes 00:23:24.845 Power Cycles: 0 00:23:24.845 Power On Hours: 0 hours 00:23:24.845 Unsafe Shutdowns: 0 00:23:24.845 Unrecoverable Media Errors: 0 00:23:24.845 Lifetime Error Log Entries: 0 00:23:24.845 Warning Temperature Time: 0 minutes 00:23:24.845 Critical Temperature Time: 0 minutes 00:23:24.845 00:23:24.845 Number of Queues 00:23:24.845 ================ 00:23:24.845 Number of I/O Submission Queues: 127 00:23:24.845 Number of I/O Completion Queues: 127 00:23:24.845 00:23:24.845 Active Namespaces 00:23:24.845 ================= 00:23:24.845 Namespace ID:1 00:23:24.845 Error Recovery Timeout: Unlimited 00:23:24.845 Command Set Identifier: NVM (00h) 00:23:24.845 Deallocate: Supported 00:23:24.845 Deallocated/Unwritten Error: Not Supported 00:23:24.845 Deallocated Read Value: Unknown 00:23:24.845 Deallocate in Write Zeroes: Not Supported 00:23:24.845 Deallocated Guard Field: 0xFFFF 00:23:24.845 Flush: Supported 00:23:24.845 Reservation: Supported 00:23:24.845 Namespace Sharing Capabilities: Multiple Controllers 00:23:24.845 Size (in LBAs): 131072 (0GiB) 00:23:24.845 Capacity (in LBAs): 131072 (0GiB) 00:23:24.845 Utilization (in LBAs): 131072 (0GiB) 00:23:24.845 NGUID: A7160DFEF1F04E09877DA1F606A7E372 00:23:24.845 UUID: a7160dfe-f1f0-4e09-877d-a1f606a7e372 00:23:24.845 Thin Provisioning: Not Supported 00:23:24.845 Per-NS Atomic Units: Yes 00:23:24.845 Atomic Boundary Size (Normal): 0 00:23:24.845 Atomic Boundary Size (PFail): 0 00:23:24.845 Atomic Boundary Offset: 0 00:23:24.845 Maximum Single Source Range Length: 65535 00:23:24.845 Maximum Copy Length: 65535 00:23:24.845 Maximum Source Range Count: 1 00:23:24.845 NGUID/EUI64 Never Reused: No 00:23:24.845 Namespace Write Protected: No 00:23:24.845 Number of LBA Formats: 1 00:23:24.845 Current LBA Format: LBA Format #00 00:23:24.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:24.845 00:23:24.845 23:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:23:25.106 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.106 [2024-07-22
23:04:01.362766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:30.388 Initializing NVMe Controllers 00:23:30.388 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:30.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:23:30.388 Initialization complete. Launching workers. 00:23:30.388 ======================================================== 00:23:30.388 Latency(us) 00:23:30.388 Device Information : IOPS MiB/s Average min max 00:23:30.388 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24095.72 94.12 5312.06 1649.38 8219.98 00:23:30.388 ======================================================== 00:23:30.388 Total : 24095.72 94.12 5312.06 1649.38 8219.98 00:23:30.388 00:23:30.388 [2024-07-22 23:04:06.467755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:30.388 23:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:23:30.388 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.648 [2024-07-22 23:04:06.772689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:35.928 Initializing NVMe Controllers 00:23:35.928 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:35.928 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:23:35.928 Initialization complete. Launching workers. 
00:23:35.928 ======================================================== 00:23:35.928 Latency(us) 00:23:35.928 Device Information : IOPS MiB/s Average min max 00:23:35.928 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24076.40 94.05 5318.48 1690.29 9593.89 00:23:35.928 ======================================================== 00:23:35.928 Total : 24076.40 94.05 5318.48 1690.29 9593.89 00:23:35.928 00:23:35.928 [2024-07-22 23:04:11.800068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:35.928 23:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:23:35.928 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.928 [2024-07-22 23:04:12.119903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:41.207 [2024-07-22 23:04:17.254473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:41.207 Initializing NVMe Controllers 00:23:41.207 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:41.207 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:23:41.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:23:41.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:23:41.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:23:41.207 Initialization complete. Launching workers. 00:23:41.207 Starting thread on core 2 00:23:41.207 Starting thread on core 3 00:23:41.207 Starting thread on core 1 00:23:41.207 23:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:23:41.207 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.467 [2024-07-22 23:04:17.691666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:44.785 [2024-07-22 23:04:20.912121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:44.785 Initializing NVMe Controllers 00:23:44.785 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:23:44.785 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:23:44.785 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:23:44.785 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:23:44.785 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:23:44.785 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:23:44.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:23:44.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:23:44.785 Initialization complete. Launching workers. 
00:23:44.785 Starting thread on core 1 with urgent priority queue 00:23:44.785 Starting thread on core 2 with urgent priority queue 00:23:44.785 Starting thread on core 3 with urgent priority queue 00:23:44.785 Starting thread on core 0 with urgent priority queue 00:23:44.785 SPDK bdev Controller (SPDK2 ) core 0: 1139.33 IO/s 87.77 secs/100000 ios 00:23:44.785 SPDK bdev Controller (SPDK2 ) core 1: 1173.33 IO/s 85.23 secs/100000 ios 00:23:44.785 SPDK bdev Controller (SPDK2 ) core 2: 1228.67 IO/s 81.39 secs/100000 ios 00:23:44.785 SPDK bdev Controller (SPDK2 ) core 3: 899.00 IO/s 111.23 secs/100000 ios 00:23:44.785 ======================================================== 00:23:44.785 00:23:44.785 23:04:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:23:44.785 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.045 [2024-07-22 23:04:21.319110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:45.045 Initializing NVMe Controllers 00:23:45.045 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:23:45.045 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:23:45.045 Namespace ID: 1 size: 0GB 00:23:45.045 Initialization complete. 00:23:45.045 INFO: using host memory buffer for IO 00:23:45.045 Hello world! 00:23:45.045 [2024-07-22 23:04:21.330348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:45.306 23:04:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:23:45.306 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.565 [2024-07-22 23:04:21.749565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:46.948 Initializing NVMe Controllers 00:23:46.948 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:23:46.948 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:23:46.948 Initialization complete. Launching workers. 
00:23:46.948 submit (in ns) avg, min, max = 12111.6, 5104.4, 4010140.0 00:23:46.948 complete (in ns) avg, min, max = 34306.8, 3013.3, 5998271.9 00:23:46.948 00:23:46.948 Submit histogram 00:23:46.948 ================ 00:23:46.948 Range in us Cumulative Count 00:23:46.948 5.096 - 5.120: 0.0514% ( 5) 00:23:46.948 5.120 - 5.144: 0.3697% ( 31) 00:23:46.948 5.144 - 5.167: 1.8692% ( 146) 00:23:46.948 5.167 - 5.191: 5.1761% ( 322) 00:23:46.948 5.191 - 5.215: 11.0917% ( 576) 00:23:46.948 5.215 - 5.239: 18.7327% ( 744) 00:23:46.948 5.239 - 5.262: 27.0104% ( 806) 00:23:46.948 5.262 - 5.286: 31.8168% ( 468) 00:23:46.948 5.286 - 5.310: 33.8708% ( 200) 00:23:46.948 5.310 - 5.333: 35.5551% ( 164) 00:23:46.948 5.333 - 5.357: 37.2907% ( 169) 00:23:46.948 5.357 - 5.381: 40.4847% ( 311) 00:23:46.948 5.381 - 5.404: 44.3874% ( 380) 00:23:46.948 5.404 - 5.428: 48.7830% ( 428) 00:23:46.948 5.428 - 5.452: 52.3364% ( 346) 00:23:46.948 5.452 - 5.476: 54.5445% ( 215) 00:23:46.948 5.476 - 5.499: 56.2904% ( 170) 00:23:46.948 5.499 - 5.523: 57.7283% ( 140) 00:23:46.948 5.523 - 5.547: 58.9709% ( 121) 00:23:46.948 5.547 - 5.570: 59.8850% ( 89) 00:23:46.948 5.570 - 5.594: 60.9325% ( 102) 00:23:46.948 5.594 - 5.618: 61.5487% ( 60) 00:23:46.948 5.618 - 5.641: 61.8671% ( 31) 00:23:46.948 5.641 - 5.665: 62.3806% ( 50) 00:23:46.948 5.665 - 5.689: 64.4757% ( 204) 00:23:46.948 5.689 - 5.713: 70.0626% ( 544) 00:23:46.948 5.713 - 5.736: 76.6766% ( 644) 00:23:46.948 5.736 - 5.760: 89.1856% ( 1218) 00:23:46.948 5.760 - 5.784: 94.5260% ( 520) 00:23:46.948 5.784 - 5.807: 96.3747% ( 180) 00:23:46.948 5.807 - 5.831: 96.8984% ( 51) 00:23:46.948 5.831 - 5.855: 97.0319% ( 13) 00:23:46.948 5.855 - 5.879: 97.1038% ( 7) 00:23:46.948 5.879 - 5.902: 97.2168% ( 11) 00:23:46.948 5.902 - 5.926: 97.2476% ( 3) 00:23:46.948 5.926 - 5.950: 97.2887% ( 4) 00:23:46.948 5.950 - 5.973: 97.3811% ( 9) 00:23:46.948 5.973 - 5.997: 97.5146% ( 13) 00:23:46.948 5.997 - 6.021: 97.6173% ( 10) 00:23:46.948 6.021 - 6.044: 97.6790% ( 6) 00:23:46.948 6.044 - 6.068: 97.7303% ( 5) 00:23:46.948 6.068 - 6.116: 97.7817% ( 5) 00:23:46.948 6.116 - 6.163: 97.8227% ( 4) 00:23:46.948 6.163 - 6.210: 97.9460% ( 12) 00:23:46.948 6.210 - 6.258: 98.0384% ( 9) 00:23:46.948 6.258 - 6.305: 98.1206% ( 8) 00:23:46.948 6.305 - 6.353: 98.1822% ( 6) 00:23:46.948 6.353 - 6.400: 98.3054% ( 12) 00:23:46.948 6.400 - 6.447: 98.3979% ( 9) 00:23:46.948 6.447 - 6.495: 98.4698% ( 7) 00:23:46.948 6.495 - 6.542: 98.5211% ( 5) 00:23:46.948 6.542 - 6.590: 98.5725% ( 5) 00:23:46.948 6.590 - 6.637: 98.6546% ( 8) 00:23:46.948 6.637 - 6.684: 98.6854% ( 3) 00:23:46.948 6.684 - 6.732: 98.7265% ( 4) 00:23:46.948 6.732 - 6.779: 98.7573% ( 3) 00:23:46.948 6.779 - 6.827: 98.7984% ( 4) 00:23:46.948 6.827 - 6.874: 98.8395% ( 4) 00:23:46.948 6.874 - 6.921: 98.8497% ( 1) 00:23:46.948 6.921 - 6.969: 98.8600% ( 1) 00:23:46.948 6.969 - 7.016: 98.8908% ( 3) 00:23:46.948 7.016 - 7.064: 98.9422% ( 5) 00:23:46.948 7.064 - 7.111: 98.9833% ( 4) 00:23:46.948 7.206 - 7.253: 98.9935% ( 1) 00:23:46.948 7.301 - 7.348: 99.0038% ( 1) 00:23:46.948 7.443 - 7.490: 99.0141% ( 1) 00:23:46.948 7.490 - 7.538: 99.0346% ( 2) 00:23:46.948 7.538 - 7.585: 99.0552% ( 2) 00:23:46.948 7.870 - 7.917: 99.0654% ( 1) 00:23:46.948 8.012 - 8.059: 99.0757% ( 1) 00:23:46.948 8.201 - 8.249: 99.0860% ( 1) 00:23:46.948 8.391 - 8.439: 99.0962% ( 1) 00:23:46.948 8.486 - 8.533: 99.1168% ( 2) 00:23:46.948 8.581 - 8.628: 99.1270% ( 1) 00:23:46.948 8.913 - 8.960: 99.1373% ( 1) 00:23:46.948 9.150 - 9.197: 99.1476% ( 1) 00:23:46.948 9.197 - 9.244: 99.1579% ( 1) 
00:23:46.948 9.292 - 9.339: 99.1681% ( 1) 00:23:46.948 9.576 - 9.624: 99.1989% ( 3) 00:23:46.948 9.719 - 9.766: 99.2092% ( 1) 00:23:46.948 9.861 - 9.908: 99.2297% ( 2) 00:23:46.948 9.908 - 9.956: 99.2503% ( 2) 00:23:46.948 9.956 - 10.003: 99.2606% ( 1) 00:23:46.948 10.003 - 10.050: 99.2708% ( 1) 00:23:46.948 10.098 - 10.145: 99.2811% ( 1) 00:23:46.948 10.145 - 10.193: 99.2914% ( 1) 00:23:46.948 10.193 - 10.240: 99.3016% ( 1) 00:23:46.948 10.287 - 10.335: 99.3119% ( 1) 00:23:46.948 10.335 - 10.382: 99.3427% ( 3) 00:23:46.948 10.382 - 10.430: 99.3633% ( 2) 00:23:46.948 10.430 - 10.477: 99.3735% ( 1) 00:23:46.948 10.619 - 10.667: 99.3941% ( 2) 00:23:46.948 10.667 - 10.714: 99.4043% ( 1) 00:23:46.948 10.761 - 10.809: 99.4146% ( 1) 00:23:46.948 10.856 - 10.904: 99.4249% ( 1) 00:23:46.948 10.904 - 10.951: 99.4454% ( 2) 00:23:46.948 10.999 - 11.046: 99.4557% ( 1) 00:23:46.948 11.141 - 11.188: 99.4865% ( 3) 00:23:46.948 11.283 - 11.330: 99.4968% ( 1) 00:23:46.948 11.330 - 11.378: 99.5070% ( 1) 00:23:46.948 11.425 - 11.473: 99.5276% ( 2) 00:23:46.948 11.567 - 11.615: 99.5378% ( 1) 00:23:46.948 11.852 - 11.899: 99.5584% ( 2) 00:23:46.948 11.947 - 11.994: 99.5687% ( 1) 00:23:46.948 12.089 - 12.136: 99.5789% ( 1) 00:23:46.948 12.136 - 12.231: 99.5892% ( 1) 00:23:46.948 12.516 - 12.610: 99.5995% ( 1) 00:23:46.948 12.610 - 12.705: 99.6097% ( 1) 00:23:46.948 12.705 - 12.800: 99.6405% ( 3) 00:23:46.948 12.895 - 12.990: 99.6714% ( 3) 00:23:46.948 13.084 - 13.179: 99.6816% ( 1) 00:23:46.948 13.748 - 13.843: 99.7022% ( 2) 00:23:46.948 14.127 - 14.222: 99.7227% ( 2) 00:23:46.948 14.412 - 14.507: 99.7330% ( 1) 00:23:46.948 14.886 - 14.981: 99.7432% ( 1) 00:23:46.948 15.076 - 15.170: 99.7535% ( 1) 00:23:46.948 15.170 - 15.265: 99.7638% ( 1) 00:23:46.948 15.644 - 15.739: 99.7741% ( 1) 00:23:46.948 16.119 - 16.213: 99.7843% ( 1) 00:23:46.948 16.498 - 16.593: 99.8049% ( 2) 00:23:46.948 16.877 - 16.972: 99.8151% ( 1) 00:23:46.948 17.825 - 17.920: 99.8254% ( 1) 00:23:46.948 21.049 - 21.144: 99.8357% ( 1) 00:23:46.948 3980.705 - 4004.978: 99.9384% ( 10) 00:23:46.948 4004.978 - 4029.250: 100.0000% ( 6) 00:23:46.949 00:23:46.949 Complete histogram 00:23:46.949 ================== 00:23:46.949 Range in us Cumulative Count 00:23:46.949 3.010 - 3.022: 0.6367% ( 62) 00:23:46.949 3.022 - 3.034: 15.4257% ( 1440) 00:23:46.949 3.034 - 3.058: 51.4840% ( 3511) 00:23:46.949 3.058 - 3.081: 67.0227% ( 1513) 00:23:46.949 3.081 - 3.105: 89.2369% ( 2163) 00:23:46.949 3.105 - 3.129: 94.7828% ( 540) 00:23:46.949 3.129 - 3.153: 97.6071% ( 275) 00:23:46.949 3.153 - 3.176: 98.0179% ( 40) 00:23:46.949 3.176 - 3.200: 98.2027% ( 18) 00:23:46.949 3.200 - 3.224: 98.2541% ( 5) 00:23:46.949 3.224 - 3.247: 98.2849% ( 3) 00:23:46.949 3.247 - 3.271: 98.2952% ( 1) 00:23:46.949 3.271 - 3.295: 98.3054% ( 1) 00:23:46.949 3.366 - 3.390: 98.3260% ( 2) 00:23:46.949 3.390 - 3.413: 98.3362% ( 1) 00:23:46.949 3.413 - 3.437: 98.3773% ( 4) 00:23:46.949 3.437 - 3.461: 98.3876% ( 1) 00:23:46.949 3.461 - 3.484: 98.4081% ( 2) 00:23:46.949 3.484 - 3.508: 98.4800% ( 7) 00:23:46.949 3.508 - 3.532: 98.5108% ( 3) 00:23:46.949 3.532 - 3.556: 98.5314% ( 2) 00:23:46.949 3.556 - 3.579: 98.5622% ( 3) 00:23:46.949 3.579 - 3.603: 98.5827% ( 2) 00:23:46.949 3.674 - 3.698: 98.5930% ( 1) 00:23:46.949 3.911 - 3.935: 98.6033% ( 1) 00:23:46.949 3.935 - 3.959: 98.6135% ( 1) 00:23:46.949 3.982 - 4.006: 98.6238% ( 1) 00:23:46.949 4.196 - 4.219: 98.6341% ( 1) 00:23:46.949 4.219 - 4.243: 98.6443% ( 1) 00:23:46.949 4.527 - 4.551: 98.6546% ( 1) 00:23:46.949 4.551 - 4.575: 98.6649% ( 1) 
00:23:46.949 4.575 - 4.599: 98.6752% ( 1) 00:23:46.949 4.599 - 4.622: 98.6957% ( 2) 00:23:46.949 4.622 - 4.646: 98.7060% ( 1) 00:23:46.949 4.646 - 4.670: 98.7368% ( 3) 00:23:46.949 4.670 - 4.693: 98.7779% ( 4) 00:23:46.949 4.693 - 4.717: 98.7881% ( 1) 00:23:46.949 4.741 - 4.764: 98.8189% ( 3) 00:23:46.949 4.788 - 4.812: 98.8292% ( 1) [2024-07-22 23:04:22.856315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:46.949 4.859 - 4.883: 98.8600% ( 3) 00:23:46.949 4.883 - 4.907: 98.8703% ( 1) 00:23:46.949 4.954 - 4.978: 98.8806% ( 1) 00:23:46.949 5.001 - 5.025: 98.9011% ( 2) 00:23:46.949 5.120 - 5.144: 98.9114% ( 1) 00:23:46.949 5.167 - 5.191: 98.9216% ( 1) 00:23:46.949 5.191 - 5.215: 98.9422% ( 2) 00:23:46.949 5.215 - 5.239: 98.9524% ( 1) 00:23:46.949 5.476 - 5.499: 98.9627% ( 1) 00:23:46.949 5.902 - 5.926: 98.9730% ( 1) 00:23:46.949 5.997 - 6.021: 98.9833% ( 1) 00:23:46.949 6.542 - 6.590: 98.9935% ( 1) 00:23:46.949 6.732 - 6.779: 99.0141% ( 2) 00:23:46.949 6.874 - 6.921: 99.0243% ( 1) 00:23:46.949 7.396 - 7.443: 99.0346% ( 1) 00:23:46.949 7.538 - 7.585: 99.0449% ( 1) 00:23:46.949 7.680 - 7.727: 99.0552% ( 1) 00:23:46.949 7.727 - 7.775: 99.0654% ( 1) 00:23:46.949 7.917 - 7.964: 99.0757% ( 1) 00:23:46.949 8.012 - 8.059: 99.0860% ( 1) 00:23:46.949 8.201 - 8.249: 99.0962% ( 1) 00:23:46.949 8.391 - 8.439: 99.1065% ( 1) 00:23:46.949 8.676 - 8.723: 99.1168% ( 1) 00:23:46.949 8.770 - 8.818: 99.1270% ( 1) 00:23:46.949 8.818 - 8.865: 99.1373% ( 1) 00:23:46.949 8.865 - 8.913: 99.1476% ( 1) 00:23:46.949 8.913 - 8.960: 99.1579% ( 1) 00:23:46.949 9.007 - 9.055: 99.1681% ( 1) 00:23:46.949 9.244 - 9.292: 99.1784% ( 1) 00:23:46.949 9.861 - 9.908: 99.1887% ( 1) 00:23:46.949 10.382 - 10.430: 99.1989% ( 1) 00:23:46.949 12.231 - 12.326: 99.2092% ( 1) 00:23:46.949 17.541 - 17.636: 99.2195% ( 1) 00:23:46.949 2014.625 - 2026.761: 99.2297% ( 1) 00:23:46.949 3980.705 - 4004.978: 99.9178% ( 67) 00:23:46.949 4004.978 - 4029.250: 99.9897% ( 7) 00:23:46.949 5995.330 - 6019.603: 100.0000% ( 1) 00:23:46.949 00:23:46.949 23:04:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:23:46.949 23:04:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:23:46.949 23:04:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:23:46.949 23:04:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:23:46.949 23:04:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:23:47.211 [ 00:23:47.211 { 00:23:47.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.211 "subtype": "Discovery", 00:23:47.211 "listen_addresses": [], 00:23:47.211 "allow_any_host": true, 00:23:47.211 "hosts": [] 00:23:47.211 }, 00:23:47.211 { 00:23:47.211 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:23:47.211 "subtype": "NVMe", 00:23:47.211 "listen_addresses": [ 00:23:47.211 { 00:23:47.211 "trtype": "VFIOUSER", 00:23:47.211 "adrfam": "IPv4", 00:23:47.211 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:23:47.211 "trsvcid": "0" 00:23:47.211 } 00:23:47.211 ], 00:23:47.211 "allow_any_host": true, 00:23:47.211 "hosts": [], 00:23:47.211 "serial_number": "SPDK1", 
00:23:47.211 "model_number": "SPDK bdev Controller", 00:23:47.211 "max_namespaces": 32, 00:23:47.211 "min_cntlid": 1, 00:23:47.211 "max_cntlid": 65519, 00:23:47.211 "namespaces": [ 00:23:47.211 { 00:23:47.211 "nsid": 1, 00:23:47.211 "bdev_name": "Malloc1", 00:23:47.211 "name": "Malloc1", 00:23:47.211 "nguid": "ED50B53EE47144C3989D9A964E82761F", 00:23:47.211 "uuid": "ed50b53e-e471-44c3-989d-9a964e82761f" 00:23:47.211 }, 00:23:47.211 { 00:23:47.211 "nsid": 2, 00:23:47.211 "bdev_name": "Malloc3", 00:23:47.211 "name": "Malloc3", 00:23:47.211 "nguid": "AACB462E59AD49EDAC86182B6D4361AA", 00:23:47.211 "uuid": "aacb462e-59ad-49ed-ac86-182b6d4361aa" 00:23:47.211 } 00:23:47.211 ] 00:23:47.211 }, 00:23:47.211 { 00:23:47.211 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:23:47.211 "subtype": "NVMe", 00:23:47.211 "listen_addresses": [ 00:23:47.211 { 00:23:47.211 "trtype": "VFIOUSER", 00:23:47.211 "adrfam": "IPv4", 00:23:47.211 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:23:47.211 "trsvcid": "0" 00:23:47.211 } 00:23:47.211 ], 00:23:47.211 "allow_any_host": true, 00:23:47.211 "hosts": [], 00:23:47.211 "serial_number": "SPDK2", 00:23:47.211 "model_number": "SPDK bdev Controller", 00:23:47.211 "max_namespaces": 32, 00:23:47.211 "min_cntlid": 1, 00:23:47.211 "max_cntlid": 65519, 00:23:47.211 "namespaces": [ 00:23:47.211 { 00:23:47.211 "nsid": 1, 00:23:47.211 "bdev_name": "Malloc2", 00:23:47.211 "name": "Malloc2", 00:23:47.211 "nguid": "A7160DFEF1F04E09877DA1F606A7E372", 00:23:47.211 "uuid": "a7160dfe-f1f0-4e09-877d-a1f606a7e372" 00:23:47.211 } 00:23:47.211 ] 00:23:47.211 } 00:23:47.211 ] 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=889869 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:23:47.472 23:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:23:47.472 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.472 [2024-07-22 23:04:23.721919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:23:48.045 Malloc4 00:23:48.045 23:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:23:48.613 [2024-07-22 23:04:24.686445] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:23:48.613 23:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:23:48.613 Asynchronous Event Request test 00:23:48.613 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:23:48.613 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:23:48.613 Registering asynchronous event callbacks... 00:23:48.613 Starting namespace attribute notice tests for all controllers... 00:23:48.613 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:48.613 aer_cb - Changed Namespace 00:23:48.613 Cleaning up... 00:23:49.182 [ 00:23:49.182 { 00:23:49.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.182 "subtype": "Discovery", 00:23:49.182 "listen_addresses": [], 00:23:49.182 "allow_any_host": true, 00:23:49.182 "hosts": [] 00:23:49.182 }, 00:23:49.182 { 00:23:49.182 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:23:49.182 "subtype": "NVMe", 00:23:49.182 "listen_addresses": [ 00:23:49.182 { 00:23:49.182 "trtype": "VFIOUSER", 00:23:49.182 "adrfam": "IPv4", 00:23:49.182 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:23:49.182 "trsvcid": "0" 00:23:49.182 } 00:23:49.182 ], 00:23:49.182 "allow_any_host": true, 00:23:49.182 "hosts": [], 00:23:49.182 "serial_number": "SPDK1", 00:23:49.182 "model_number": "SPDK bdev Controller", 00:23:49.182 "max_namespaces": 32, 00:23:49.182 "min_cntlid": 1, 00:23:49.182 "max_cntlid": 65519, 00:23:49.182 "namespaces": [ 00:23:49.182 { 00:23:49.182 "nsid": 1, 00:23:49.182 "bdev_name": "Malloc1", 00:23:49.182 "name": "Malloc1", 00:23:49.182 "nguid": "ED50B53EE47144C3989D9A964E82761F", 00:23:49.182 "uuid": "ed50b53e-e471-44c3-989d-9a964e82761f" 00:23:49.182 }, 00:23:49.182 { 00:23:49.182 "nsid": 2, 00:23:49.182 "bdev_name": "Malloc3", 00:23:49.182 "name": "Malloc3", 00:23:49.182 "nguid": "AACB462E59AD49EDAC86182B6D4361AA", 00:23:49.182 "uuid": "aacb462e-59ad-49ed-ac86-182b6d4361aa" 00:23:49.182 } 00:23:49.182 ] 00:23:49.182 }, 00:23:49.182 { 00:23:49.182 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:23:49.182 "subtype": "NVMe", 00:23:49.182 "listen_addresses": [ 00:23:49.182 { 00:23:49.182 "trtype": "VFIOUSER", 00:23:49.182 "adrfam": "IPv4", 00:23:49.182 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:23:49.182 "trsvcid": "0" 00:23:49.182 } 00:23:49.182 ], 00:23:49.182 "allow_any_host": true, 00:23:49.182 "hosts": [], 00:23:49.182 
"serial_number": "SPDK2", 00:23:49.182 "model_number": "SPDK bdev Controller", 00:23:49.182 "max_namespaces": 32, 00:23:49.182 "min_cntlid": 1, 00:23:49.182 "max_cntlid": 65519, 00:23:49.182 "namespaces": [ 00:23:49.182 { 00:23:49.182 "nsid": 1, 00:23:49.182 "bdev_name": "Malloc2", 00:23:49.182 "name": "Malloc2", 00:23:49.182 "nguid": "A7160DFEF1F04E09877DA1F606A7E372", 00:23:49.182 "uuid": "a7160dfe-f1f0-4e09-877d-a1f606a7e372" 00:23:49.182 }, 00:23:49.182 { 00:23:49.182 "nsid": 2, 00:23:49.182 "bdev_name": "Malloc4", 00:23:49.182 "name": "Malloc4", 00:23:49.182 "nguid": "B5F79F09A92541FCBEBD827271E38CD1", 00:23:49.182 "uuid": "b5f79f09-a925-41fc-bebd-827271e38cd1" 00:23:49.182 } 00:23:49.182 ] 00:23:49.182 } 00:23:49.182 ] 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 889869 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 883992 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 883992 ']' 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 883992 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883992 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883992' 00:23:49.182 killing process with pid 883992 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 883992 00:23:49.182 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 883992 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=890098 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 890098' 00:23:49.752 Process pid: 890098 00:23:49.752 23:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 890098 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 890098 ']' 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.752 23:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:23:49.752 [2024-07-22 23:04:25.892113] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:49.752 [2024-07-22 23:04:25.894837] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:23:49.752 [2024-07-22 23:04:25.894968] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.752 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.012 [2024-07-22 23:04:26.076176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.012 [2024-07-22 23:04:26.251248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.012 [2024-07-22 23:04:26.251413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.012 [2024-07-22 23:04:26.251455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.012 [2024-07-22 23:04:26.251501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.012 [2024-07-22 23:04:26.251532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.012 [2024-07-22 23:04:26.251707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.012 [2024-07-22 23:04:26.251795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.012 [2024-07-22 23:04:26.251876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.012 [2024-07-22 23:04:26.251886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.273 [2024-07-22 23:04:26.442410] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:50.273 [2024-07-22 23:04:26.442720] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:50.273 [2024-07-22 23:04:26.443136] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:23:50.273 [2024-07-22 23:04:26.444052] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:50.273 [2024-07-22 23:04:26.444456] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:23:50.273 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.273 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:23:50.273 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:23:51.652 23:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:23:51.910 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:23:51.910 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:23:51.910 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:51.910 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:23:51.910 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:52.169 Malloc1 00:23:52.426 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:23:52.684 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:23:53.252 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:23:53.820 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:23:53.820 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:23:53.820 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:23:54.388 Malloc2 00:23:54.388 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:23:54.955 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:23:55.214 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 890098 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 890098 ']' 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 890098 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890098 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890098' 00:23:55.783 killing process with pid 890098 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 890098 00:23:55.783 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 890098 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:23:56.351 00:23:56.351 real 1m2.097s 00:23:56.351 user 4m6.906s 00:23:56.351 sys 0m7.309s 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:23:56.351 ************************************ 00:23:56.351 END TEST nvmf_vfio_user 00:23:56.351 ************************************ 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:56.351 ************************************ 00:23:56.351 START TEST nvmf_vfio_user_nvme_compliance 00:23:56.351 ************************************ 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:23:56.351 * Looking for test storage... 
00:23:56.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=890955 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 890955' 00:23:56.351 Process pid: 890955 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 890955 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 890955 ']' 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.351 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:23:56.610 [2024-07-22 23:04:32.670374] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:23:56.610 [2024-07-22 23:04:32.670472] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.610 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.610 [2024-07-22 23:04:32.747830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:56.610 [2024-07-22 23:04:32.884967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.610 [2024-07-22 23:04:32.885071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.610 [2024-07-22 23:04:32.885106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.610 [2024-07-22 23:04:32.885135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.610 [2024-07-22 23:04:32.885161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.610 [2024-07-22 23:04:32.885283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.610 [2024-07-22 23:04:32.885352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.610 [2024-07-22 23:04:32.885357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.869 23:04:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.869 23:04:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:23:56.869 23:04:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:23:57.808 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:23:57.808 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:23:57.809 malloc0 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.809 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.068 23:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:23:58.068 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.068 00:23:58.068 00:23:58.068 CUnit - A unit testing framework for C - Version 2.1-3 00:23:58.068 http://cunit.sourceforge.net/ 00:23:58.068 00:23:58.068 00:23:58.068 Suite: nvme_compliance 00:23:58.328 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-22 23:04:34.384001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:58.328 [2024-07-22 23:04:34.385655] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:23:58.328 [2024-07-22 23:04:34.385692] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:23:58.328 [2024-07-22 23:04:34.385710] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:23:58.328 [2024-07-22 23:04:34.389051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:58.328 passed 00:23:58.328 Test: admin_identify_ctrlr_verify_fused ...[2024-07-22 23:04:34.491840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:58.328 [2024-07-22 23:04:34.494870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:58.328 passed 00:23:58.328 Test: admin_identify_ns ...[2024-07-22 23:04:34.600143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:58.588 [2024-07-22 23:04:34.661338] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:23:58.588 [2024-07-22 23:04:34.669335] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:23:58.588 [2024-07-22 
23:04:34.690514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:58.588 passed 00:23:58.588 Test: admin_get_features_mandatory_features ...[2024-07-22 23:04:34.792378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:58.588 [2024-07-22 23:04:34.795401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:58.588 passed 00:23:58.588 Test: admin_get_features_optional_features ...[2024-07-22 23:04:34.899170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:58.848 [2024-07-22 23:04:34.902194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:58.849 passed 00:23:58.849 Test: admin_set_features_number_of_queues ...[2024-07-22 23:04:35.004325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:58.849 [2024-07-22 23:04:35.111464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:58.849 passed 00:23:59.108 Test: admin_get_log_page_mandatory_logs ...[2024-07-22 23:04:35.213294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.108 [2024-07-22 23:04:35.219341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.108 passed 00:23:59.108 Test: admin_get_log_page_with_lpo ...[2024-07-22 23:04:35.319122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.108 [2024-07-22 23:04:35.390329] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:23:59.108 [2024-07-22 23:04:35.403443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.366 passed 00:23:59.366 Test: fabric_property_get ...[2024-07-22 23:04:35.505307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.366 [2024-07-22 23:04:35.506718] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:23:59.366 [2024-07-22 23:04:35.508343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.366 passed 00:23:59.366 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-22 23:04:35.612143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.366 [2024-07-22 23:04:35.613550] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:23:59.366 [2024-07-22 23:04:35.615168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.366 passed 00:23:59.626 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-22 23:04:35.717119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.626 [2024-07-22 23:04:35.802330] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:23:59.626 [2024-07-22 23:04:35.818328] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:23:59.626 [2024-07-22 23:04:35.823456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.626 passed 00:23:59.626 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-22 23:04:35.924376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.626 [2024-07-22 23:04:35.925841] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:23:59.626 [2024-07-22 23:04:35.927428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.886 passed 00:23:59.886 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-22 23:04:36.031133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:23:59.886 [2024-07-22 23:04:36.108326] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:23:59.886 [2024-07-22 23:04:36.132328] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:23:59.886 [2024-07-22 23:04:36.137468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:23:59.886 passed 00:24:00.146 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-22 23:04:36.239349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:00.146 [2024-07-22 23:04:36.240750] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:24:00.146 [2024-07-22 23:04:36.240805] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:24:00.146 [2024-07-22 23:04:36.242383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:00.146 passed 00:24:00.146 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-22 23:04:36.345112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:00.146 [2024-07-22 23:04:36.436344] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:24:00.146 [2024-07-22 23:04:36.444324] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:24:00.146 [2024-07-22 23:04:36.452346] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:24:00.406 [2024-07-22 23:04:36.460327] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:24:00.406 [2024-07-22 23:04:36.489466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:00.406 passed 00:24:00.406 Test: admin_create_io_sq_verify_pc ...[2024-07-22 23:04:36.591326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:00.406 [2024-07-22 23:04:36.609339] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:24:00.406 [2024-07-22 23:04:36.627140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:00.406 passed 00:24:00.666 Test: admin_create_io_qp_max_qps ...[2024-07-22 23:04:36.729963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:01.664 [2024-07-22 23:04:37.840334] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:24:01.924 [2024-07-22 23:04:38.213276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:02.183 passed 00:24:02.183 Test: admin_create_io_sq_shared_cq ...[2024-07-22 23:04:38.315500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:24:02.183 [2024-07-22 23:04:38.451321] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:24:02.183 [2024-07-22 23:04:38.488443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:24:02.443 passed 00:24:02.443 00:24:02.443 Run Summary: Type Total Ran Passed Failed Inactive 00:24:02.443 
suites 1 1 n/a 0 0 00:24:02.443 tests 18 18 18 0 0 00:24:02.443 asserts 360 360 360 0 n/a 00:24:02.443 00:24:02.443 Elapsed time = 1.740 seconds 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 890955 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 890955 ']' 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 890955 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890955 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890955' 00:24:02.443 killing process with pid 890955 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 890955 00:24:02.443 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 890955 00:24:02.703 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:24:02.703 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:02.703 00:24:02.703 real 0m6.483s 00:24:02.703 user 0m17.977s 00:24:02.703 sys 0m0.756s 00:24:02.703 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.703 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:24:02.703 ************************************ 00:24:02.703 END TEST nvmf_vfio_user_nvme_compliance 00:24:02.703 ************************************ 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:02.965 ************************************ 00:24:02.965 START TEST nvmf_vfio_user_fuzz 00:24:02.965 ************************************ 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:24:02.965 * Looking for test storage... 
00:24:02.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=891744 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 891744' 00:24:02.965 Process pid: 891744 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 891744 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 891744 ']' 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
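The records above show target/vfio_user_fuzz.sh launching a dedicated nvmf_tgt (-i 0 -e 0xFFFF -m 0x1), capturing its pid (891744), installing a killprocess trap, and then blocking in waitforlisten until /var/tmp/spdk.sock answers. A minimal standalone sketch of that launch-and-wait step follows; SPDK_ROOT, the rpc_get_methods probe, and the retry loop are assumptions standing in for the harness's own waitforlisten helper, not the harness code itself.

#!/usr/bin/env bash
# Sketch only: start an SPDK NVMe-oF target and wait for its RPC socket.
# SPDK_ROOT and the polling loop are assumptions; the CI job uses the
# waitforlisten/killprocess helpers from autotest_common.sh instead.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT

# Poll until the target answers a trivial RPC on the UNIX domain socket.
for _ in $(seq 1 100); do
    if "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done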
00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.965 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.535 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.535 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:03.535 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:24:04.473 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:24:04.473 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.473 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.733 malloc0 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
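Before the fuzzer starts, the target/vfio_user_fuzz.sh@32-41 records above provision the target over RPC: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 (allow-any-host, serial "spdk"), that bdev as its namespace, and a VFIOUSER listener rooted at /var/run/vfio-user. Condensed into a plain rpc.py sequence (script path and default RPC socket assumed to match the workspace in the log), the same setup is:

#!/usr/bin/env bash
# Sketch: the RPC sequence the test drives through rpc_cmd, condensed.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2021-09.io.spdk:cnode0
traddr=/var/run/vfio-user

$rpc nvmf_create_transport -t VFIOUSER           # vfio-user transport
mkdir -p "$traddr"                               # socket directory for the listener
$rpc bdev_malloc_create 64 512 -b malloc0        # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem "$nqn" -a -s spdk     # allow-any-host, serial "spdk"
$rpc nvmf_subsystem_add_ns "$nqn" malloc0        # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

The trid string assembled at the end of the trace ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what the nvme_fuzz run traced below passes via -F for its 30-second, fixed-seed (-S 123456) fuzzing pass against this subsystem.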
00:24:04.733 23:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:24:36.824 Fuzzing completed. Shutting down the fuzz application 00:24:36.824 00:24:36.824 Dumping successful admin opcodes: 00:24:36.824 8, 9, 10, 24, 00:24:36.824 Dumping successful io opcodes: 00:24:36.824 0, 00:24:36.824 NS: 0x200003a1ef00 I/O qp, Total commands completed: 325823, total successful commands: 1287, random_seed: 3755428864 00:24:36.824 NS: 0x200003a1ef00 admin qp, Total commands completed: 49445, total successful commands: 397, random_seed: 2886650944 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 891744 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 891744 ']' 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 891744 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891744 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891744' 00:24:36.824 killing process with pid 891744 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 891744 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 891744 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:24:36.824 00:24:36.824 real 0m32.820s 00:24:36.824 user 0m31.284s 00:24:36.824 sys 0m20.992s 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:36.824 ************************************ 
00:24:36.824 END TEST nvmf_vfio_user_fuzz 00:24:36.824 ************************************ 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.824 ************************************ 00:24:36.824 START TEST nvmf_auth_target 00:24:36.824 ************************************ 00:24:36.824 23:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:24:36.824 * Looking for test storage... 00:24:36.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.824 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.824 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:24:36.824 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.824 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.824 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.824 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.825 23:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.825 23:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a 
pci_devs 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:39.365 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:39.365 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:39.365 Found net devices under 0000:84:00.0: cvl_0_0 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:39.365 Found net devices under 0000:84:00.1: cvl_0_1 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.365 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:39.366 00:24:39.366 --- 10.0.0.2 ping statistics --- 00:24:39.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.366 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:24:39.366 00:24:39.366 --- 10.0.0.1 ping statistics --- 00:24:39.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.366 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=897108 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 897108 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 897108 ']' 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 
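The nvmf/common.sh@229-268 records above build the TCP test bed for nvmf_auth_target: of the two ice ports found earlier, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side) while cvl_0_1 stays in the root namespace with 10.0.0.1/24 (initiator side), port 4420 is opened in iptables, and both directions are ping-verified before the target starts. A standalone sketch of that plumbing, using the interface names this CI host reports (requires root; a sketch of the traced commands, not the common.sh helper itself):

#!/usr/bin/env bash
# Sketch of the netns plumbing traced above.
TGT_IF=cvl_0_0      # target-side port, moved into its own namespace
INI_IF=cvl_0_1      # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

With the namespace in place, the auth target (pid 897108) is launched via ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth, and the host-side spdk_tgt on /var/tmp/host.sock that drives the DH-HMAC-CHAP keyring and bdev_nvme calls follows in the records below.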
00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.366 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.626 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.626 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:39.626 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.626 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:39.626 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=897139 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=361392811a5bfc1d83b69191544d010c682866a2615fdf7c 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.guh 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 361392811a5bfc1d83b69191544d010c682866a2615fdf7c 0 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 361392811a5bfc1d83b69191544d010c682866a2615fdf7c 0 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=361392811a5bfc1d83b69191544d010c682866a2615fdf7c 00:24:39.886 
23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:24:39.886 23:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.guh 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.guh 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.guh 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3f06ebce0ebfc32b6f8079862daa3441ba1cf937f9118faa3c92d511bffb5775 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ljk 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f06ebce0ebfc32b6f8079862daa3441ba1cf937f9118faa3c92d511bffb5775 3 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3f06ebce0ebfc32b6f8079862daa3441ba1cf937f9118faa3c92d511bffb5775 3 00:24:39.886 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:39.887 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:39.887 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f06ebce0ebfc32b6f8079862daa3441ba1cf937f9118faa3c92d511bffb5775 00:24:39.887 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:24:39.887 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ljk 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ljk 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ljk 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.145 23:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9dbf9a4e9693e58fe01c92764846d06e 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.del 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9dbf9a4e9693e58fe01c92764846d06e 1 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9dbf9a4e9693e58fe01c92764846d06e 1 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9dbf9a4e9693e58fe01c92764846d06e 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.del 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.del 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.del 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4d431bdf8fd79f79dd3e46241cb0cce6b0d65da44ab1434f 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jsj 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4d431bdf8fd79f79dd3e46241cb0cce6b0d65da44ab1434f 2 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
4d431bdf8fd79f79dd3e46241cb0cce6b0d65da44ab1434f 2 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4d431bdf8fd79f79dd3e46241cb0cce6b0d65da44ab1434f 00:24:40.145 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jsj 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jsj 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.jsj 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e547bf951ce4a5b56bd8cd832036d540bbca584b912a2f0a 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ExD 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e547bf951ce4a5b56bd8cd832036d540bbca584b912a2f0a 2 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e547bf951ce4a5b56bd8cd832036d540bbca584b912a2f0a 2 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e547bf951ce4a5b56bd8cd832036d540bbca584b912a2f0a 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ExD 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ExD 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ExD 00:24:40.146 23:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=999b93cd8ef400f97ce264addca65cc6 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PuN 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 999b93cd8ef400f97ce264addca65cc6 1 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 999b93cd8ef400f97ce264addca65cc6 1 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=999b93cd8ef400f97ce264addca65cc6 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PuN 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PuN 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.PuN 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a67e38012b20ecac68912b420d9529524c9be070afc6a04f8b0272b3c686c1be 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:40.146 
23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GFO 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a67e38012b20ecac68912b420d9529524c9be070afc6a04f8b0272b3c686c1be 3 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a67e38012b20ecac68912b420d9529524c9be070afc6a04f8b0272b3c686c1be 3 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a67e38012b20ecac68912b420d9529524c9be070afc6a04f8b0272b3c686c1be 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:24:40.146 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GFO 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GFO 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.GFO 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 897108 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 897108 ']' 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.403 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 897139 /var/tmp/host.sock 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 897139 ']' 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:24:40.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.662 23:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.guh 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.guh 00:24:41.232 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.guh 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ljk ]] 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ljk 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ljk 00:24:41.801 23:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ljk 00:24:42.059 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:42.059 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.del 00:24:42.059 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.059 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.059 23:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.059 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.del 00:24:42.059 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.del 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.jsj ]] 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jsj 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jsj 00:24:42.317 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jsj 00:24:42.885 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:42.885 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ExD 00:24:42.885 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.885 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.885 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.885 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ExD 00:24:42.885 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ExD 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.PuN ]] 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PuN 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PuN 00:24:43.454 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PuN 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GFO 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GFO 00:24:43.712 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GFO 00:24:43.969 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:24:43.969 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:43.969 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.969 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:43.969 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:43.969 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.536 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.794 00:24:44.794 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:44.794 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:44.794 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.052 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.052 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:45.053 { 00:24:45.053 "cntlid": 1, 00:24:45.053 "qid": 0, 00:24:45.053 "state": "enabled", 00:24:45.053 "thread": "nvmf_tgt_poll_group_000", 00:24:45.053 "listen_address": { 00:24:45.053 "trtype": "TCP", 00:24:45.053 "adrfam": "IPv4", 00:24:45.053 "traddr": "10.0.0.2", 00:24:45.053 "trsvcid": "4420" 00:24:45.053 }, 00:24:45.053 "peer_address": { 00:24:45.053 "trtype": "TCP", 00:24:45.053 "adrfam": "IPv4", 00:24:45.053 "traddr": "10.0.0.1", 00:24:45.053 "trsvcid": "41766" 00:24:45.053 }, 00:24:45.053 "auth": { 00:24:45.053 "state": "completed", 00:24:45.053 "digest": "sha256", 00:24:45.053 "dhgroup": "null" 00:24:45.053 } 00:24:45.053 } 00:24:45.053 ]' 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:45.053 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:45.312 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:45.312 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:45.312 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:45.312 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:45.312 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:45.891 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:47.803 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.060 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:24:48.317 00:24:48.317 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:48.317 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:48.317 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:49.253 { 00:24:49.253 "cntlid": 3, 00:24:49.253 "qid": 0, 00:24:49.253 "state": "enabled", 00:24:49.253 "thread": "nvmf_tgt_poll_group_000", 00:24:49.253 "listen_address": { 00:24:49.253 "trtype": "TCP", 00:24:49.253 "adrfam": "IPv4", 00:24:49.253 "traddr": "10.0.0.2", 00:24:49.253 "trsvcid": "4420" 00:24:49.253 }, 00:24:49.253 "peer_address": { 00:24:49.253 "trtype": "TCP", 00:24:49.253 "adrfam": "IPv4", 00:24:49.253 "traddr": "10.0.0.1", 00:24:49.253 "trsvcid": "41794" 00:24:49.253 }, 00:24:49.253 "auth": { 00:24:49.253 "state": "completed", 00:24:49.253 "digest": "sha256", 00:24:49.253 "dhgroup": "null" 00:24:49.253 } 00:24:49.253 } 00:24:49.253 ]' 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:49.253 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:49.821 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:51.200 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:51.200 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:51.769 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.770 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.338 00:24:52.339 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:52.339 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:52.339 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:52.907 { 00:24:52.907 "cntlid": 5, 00:24:52.907 "qid": 0, 00:24:52.907 "state": "enabled", 00:24:52.907 "thread": "nvmf_tgt_poll_group_000", 00:24:52.907 "listen_address": { 00:24:52.907 "trtype": "TCP", 00:24:52.907 "adrfam": "IPv4", 00:24:52.907 "traddr": "10.0.0.2", 00:24:52.907 "trsvcid": "4420" 00:24:52.907 }, 00:24:52.907 "peer_address": { 00:24:52.907 "trtype": "TCP", 00:24:52.907 "adrfam": "IPv4", 00:24:52.907 "traddr": "10.0.0.1", 00:24:52.907 "trsvcid": "41090" 00:24:52.907 }, 00:24:52.907 "auth": { 00:24:52.907 "state": "completed", 00:24:52.907 "digest": "sha256", 00:24:52.907 "dhgroup": "null" 00:24:52.907 } 00:24:52.907 } 00:24:52.907 ]' 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:52.907 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:53.166 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:53.166 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:53.166 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:53.166 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:53.166 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:53.735 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:55.114 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.681 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:56.618 00:24:56.618 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:56.618 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:56.618 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:57.189 { 00:24:57.189 "cntlid": 7, 00:24:57.189 "qid": 0, 00:24:57.189 "state": "enabled", 00:24:57.189 "thread": "nvmf_tgt_poll_group_000", 00:24:57.189 "listen_address": { 00:24:57.189 "trtype": "TCP", 00:24:57.189 "adrfam": "IPv4", 00:24:57.189 "traddr": "10.0.0.2", 00:24:57.189 "trsvcid": "4420" 00:24:57.189 }, 00:24:57.189 "peer_address": { 00:24:57.189 "trtype": "TCP", 00:24:57.189 "adrfam": "IPv4", 00:24:57.189 "traddr": "10.0.0.1", 00:24:57.189 "trsvcid": "41110" 00:24:57.189 }, 00:24:57.189 "auth": { 00:24:57.189 "state": "completed", 00:24:57.189 "digest": "sha256", 00:24:57.189 "dhgroup": "null" 00:24:57.189 } 00:24:57.189 } 00:24:57.189 ]' 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:57.189 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:58.127 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:59.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:59.502 23:05:35 
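From here the script enters the next pass of its three-level loop: for every digest, for every DH group, and for every key slot it reprograms the host with bdev_nvme_set_options and then runs one authenticated connect. Only sha256 rounds are visible in this excerpt, and the null group has just given way to ffdhe2048. A loop skeleton is sketched below, with the digest and group lists partly assumed and the per-slot body (connect_authenticate in the traced script) reduced to a placeholder comment.

    for digest in sha256 sha384 sha512; do            # only sha256 appears in this excerpt
        for dhgroup in null ffdhe2048 ffdhe3072; do   # assumed continuation of the traced order
            for keyid in 0 1 2 3; do
                rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                : # connect_authenticate "$digest" "$dhgroup" "$keyid" (attach, verify, tear down)
            done
        done
    done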
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:59.502 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:59.503 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.503 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.503 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.503 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.503 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.503 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.440 00:25:00.441 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:00.441 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:00.441 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.699 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.699 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:00.699 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.700 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.700 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.700 23:05:36 
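Each connect_authenticate round pairs the two sides explicitly: the subsystem is told which key (and, when present, which controller key) the host NQN must use, and the host-side controller is attached with the matching key names. A sketch for slot 0 of the ffdhe2048 pass, with the subsystem and host NQNs and RPC flags taken verbatim from the trace:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # target: require DH-HMAC-CHAP with key0, plus ckey0 for bidirectional (controller) authentication
    rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host: attach a controller that presents the same key pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0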
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:00.700 { 00:25:00.700 "cntlid": 9, 00:25:00.700 "qid": 0, 00:25:00.700 "state": "enabled", 00:25:00.700 "thread": "nvmf_tgt_poll_group_000", 00:25:00.700 "listen_address": { 00:25:00.700 "trtype": "TCP", 00:25:00.700 "adrfam": "IPv4", 00:25:00.700 "traddr": "10.0.0.2", 00:25:00.700 "trsvcid": "4420" 00:25:00.700 }, 00:25:00.700 "peer_address": { 00:25:00.700 "trtype": "TCP", 00:25:00.700 "adrfam": "IPv4", 00:25:00.700 "traddr": "10.0.0.1", 00:25:00.700 "trsvcid": "41130" 00:25:00.700 }, 00:25:00.700 "auth": { 00:25:00.700 "state": "completed", 00:25:00.700 "digest": "sha256", 00:25:00.700 "dhgroup": "ffdhe2048" 00:25:00.700 } 00:25:00.700 } 00:25:00.700 ]' 00:25:00.700 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:00.958 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:01.218 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:02.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.594 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:02.853 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:25:02.853 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.853 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.421 00:25:03.421 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:03.421 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:03.421 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:03.990 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.990 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:03.990 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.990 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:03.990 { 00:25:03.990 "cntlid": 11, 00:25:03.990 "qid": 0, 00:25:03.990 "state": "enabled", 00:25:03.990 "thread": "nvmf_tgt_poll_group_000", 00:25:03.990 "listen_address": { 
00:25:03.990 "trtype": "TCP", 00:25:03.990 "adrfam": "IPv4", 00:25:03.990 "traddr": "10.0.0.2", 00:25:03.990 "trsvcid": "4420" 00:25:03.990 }, 00:25:03.990 "peer_address": { 00:25:03.990 "trtype": "TCP", 00:25:03.990 "adrfam": "IPv4", 00:25:03.990 "traddr": "10.0.0.1", 00:25:03.990 "trsvcid": "51842" 00:25:03.990 }, 00:25:03.990 "auth": { 00:25:03.990 "state": "completed", 00:25:03.990 "digest": "sha256", 00:25:03.990 "dhgroup": "ffdhe2048" 00:25:03.990 } 00:25:03.990 } 00:25:03.990 ]' 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:03.990 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:04.603 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:05.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:05.995 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.255 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.194 00:25:07.194 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:07.194 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:07.194 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:07.765 { 00:25:07.765 "cntlid": 13, 00:25:07.765 "qid": 0, 00:25:07.765 "state": "enabled", 00:25:07.765 "thread": "nvmf_tgt_poll_group_000", 00:25:07.765 "listen_address": { 00:25:07.765 "trtype": "TCP", 00:25:07.765 "adrfam": "IPv4", 00:25:07.765 "traddr": "10.0.0.2", 00:25:07.765 "trsvcid": "4420" 00:25:07.765 }, 00:25:07.765 "peer_address": { 00:25:07.765 "trtype": "TCP", 00:25:07.765 "adrfam": "IPv4", 00:25:07.765 "traddr": "10.0.0.1", 00:25:07.765 "trsvcid": "51872" 00:25:07.765 }, 00:25:07.765 "auth": { 00:25:07.765 
"state": "completed", 00:25:07.765 "digest": "sha256", 00:25:07.765 "dhgroup": "ffdhe2048" 00:25:07.765 } 00:25:07.765 } 00:25:07.765 ]' 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:07.765 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:07.765 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:07.765 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.765 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:08.705 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:10.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.084 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.084 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:11.024 00:25:11.024 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:11.024 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:11.025 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:11.285 { 00:25:11.285 "cntlid": 15, 00:25:11.285 "qid": 0, 00:25:11.285 "state": "enabled", 00:25:11.285 "thread": "nvmf_tgt_poll_group_000", 00:25:11.285 "listen_address": { 00:25:11.285 "trtype": "TCP", 00:25:11.285 "adrfam": "IPv4", 00:25:11.285 "traddr": "10.0.0.2", 00:25:11.285 "trsvcid": "4420" 00:25:11.285 }, 00:25:11.285 "peer_address": { 00:25:11.285 "trtype": "TCP", 00:25:11.285 "adrfam": "IPv4", 00:25:11.285 "traddr": "10.0.0.1", 00:25:11.285 "trsvcid": "51916" 00:25:11.285 }, 00:25:11.285 "auth": { 00:25:11.285 "state": "completed", 00:25:11.285 "digest": "sha256", 00:25:11.285 "dhgroup": "ffdhe2048" 00:25:11.285 } 00:25:11.285 } 00:25:11.285 ]' 00:25:11.285 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:11.545 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:11.545 23:05:47 
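After each attach the script proves that authentication actually completed, not merely that the connect returned: the controller name is read back over the host socket, and the subsystem's queue pairs are dumped on the target so the negotiated digest, DH group and auth state can be asserted with jq. A sketch of those checks for the sha256/ffdhe2048 rounds shown here, using the same jq filters as the trace:

    # host side: exactly one controller named nvme0 should exist
    [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # target side: the qpair's auth block must reflect the negotiated parameters
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]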
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:11.545 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:11.545 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:11.545 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:11.545 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:11.545 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:12.485 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:13.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.425 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.995 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.255 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.255 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.255 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.513 00:25:14.513 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:14.513 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:14.513 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:15.084 { 00:25:15.084 "cntlid": 17, 00:25:15.084 "qid": 0, 00:25:15.084 "state": "enabled", 00:25:15.084 "thread": "nvmf_tgt_poll_group_000", 00:25:15.084 "listen_address": { 00:25:15.084 "trtype": "TCP", 00:25:15.084 "adrfam": "IPv4", 00:25:15.084 "traddr": "10.0.0.2", 00:25:15.084 "trsvcid": "4420" 00:25:15.084 }, 00:25:15.084 "peer_address": { 00:25:15.084 "trtype": "TCP", 00:25:15.084 "adrfam": "IPv4", 00:25:15.084 "traddr": "10.0.0.1", 00:25:15.084 "trsvcid": "48540" 00:25:15.084 }, 00:25:15.084 "auth": { 00:25:15.084 "state": "completed", 00:25:15.084 "digest": "sha256", 00:25:15.084 "dhgroup": "ffdhe3072" 00:25:15.084 } 00:25:15.084 } 00:25:15.084 ]' 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:15.084 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:15.344 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:15.344 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:15.344 23:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:15.344 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:15.344 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:15.344 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:15.914 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:17.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:17.296 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.867 23:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.867 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.436 00:25:18.436 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:18.436 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:18.436 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:19.006 { 00:25:19.006 "cntlid": 19, 00:25:19.006 "qid": 0, 00:25:19.006 "state": "enabled", 00:25:19.006 "thread": "nvmf_tgt_poll_group_000", 00:25:19.006 "listen_address": { 00:25:19.006 "trtype": "TCP", 00:25:19.006 "adrfam": "IPv4", 00:25:19.006 "traddr": "10.0.0.2", 00:25:19.006 "trsvcid": "4420" 00:25:19.006 }, 00:25:19.006 "peer_address": { 00:25:19.006 "trtype": "TCP", 00:25:19.006 "adrfam": "IPv4", 00:25:19.006 "traddr": "10.0.0.1", 00:25:19.006 "trsvcid": "48568" 00:25:19.006 }, 00:25:19.006 "auth": { 00:25:19.006 "state": "completed", 00:25:19.006 "digest": "sha256", 00:25:19.006 "dhgroup": "ffdhe3072" 00:25:19.006 } 00:25:19.006 } 00:25:19.006 ]' 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:19.006 23:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:19.006 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:19.264 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:25:20.645 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:20.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.905 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.166 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.105 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:22.105 { 00:25:22.105 "cntlid": 21, 00:25:22.105 "qid": 0, 00:25:22.105 "state": "enabled", 00:25:22.105 "thread": "nvmf_tgt_poll_group_000", 00:25:22.105 "listen_address": { 00:25:22.105 "trtype": "TCP", 00:25:22.105 "adrfam": "IPv4", 00:25:22.105 "traddr": "10.0.0.2", 00:25:22.105 "trsvcid": "4420" 00:25:22.105 }, 00:25:22.105 "peer_address": { 00:25:22.105 "trtype": "TCP", 00:25:22.105 "adrfam": "IPv4", 00:25:22.105 "traddr": "10.0.0.1", 00:25:22.105 "trsvcid": "37510" 00:25:22.105 }, 00:25:22.105 "auth": { 00:25:22.105 "state": "completed", 00:25:22.105 "digest": "sha256", 00:25:22.105 "dhgroup": "ffdhe3072" 00:25:22.105 } 00:25:22.105 } 00:25:22.105 ]' 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:22.105 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:22.389 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:22.389 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:22.389 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:22.389 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:22.389 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:22.649 
23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:24.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.557 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:24.819 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:25.388 00:25:25.388 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:25.388 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:25.388 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:25.956 { 00:25:25.956 "cntlid": 23, 00:25:25.956 "qid": 0, 00:25:25.956 "state": "enabled", 00:25:25.956 "thread": "nvmf_tgt_poll_group_000", 00:25:25.956 "listen_address": { 00:25:25.956 "trtype": "TCP", 00:25:25.956 "adrfam": "IPv4", 00:25:25.956 "traddr": "10.0.0.2", 00:25:25.956 "trsvcid": "4420" 00:25:25.956 }, 00:25:25.956 "peer_address": { 00:25:25.956 "trtype": "TCP", 00:25:25.956 "adrfam": "IPv4", 00:25:25.956 "traddr": "10.0.0.1", 00:25:25.956 "trsvcid": "37538" 00:25:25.956 }, 00:25:25.956 "auth": { 00:25:25.956 "state": "completed", 00:25:25.956 "digest": "sha256", 00:25:25.956 "dhgroup": "ffdhe3072" 00:25:25.956 } 00:25:25.956 } 00:25:25.956 ]' 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:25.956 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:26.214 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:26.214 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:26.214 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:26.214 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:26.214 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:26.472 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:25:28.378 23:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:28.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.378 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.947 00:25:28.947 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:28.947 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:28.947 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:29.517 { 00:25:29.517 "cntlid": 25, 00:25:29.517 "qid": 0, 00:25:29.517 "state": "enabled", 00:25:29.517 "thread": "nvmf_tgt_poll_group_000", 00:25:29.517 "listen_address": { 00:25:29.517 "trtype": "TCP", 00:25:29.517 "adrfam": "IPv4", 00:25:29.517 "traddr": "10.0.0.2", 00:25:29.517 "trsvcid": "4420" 00:25:29.517 }, 00:25:29.517 "peer_address": { 00:25:29.517 "trtype": "TCP", 00:25:29.517 "adrfam": "IPv4", 00:25:29.517 "traddr": "10.0.0.1", 00:25:29.517 "trsvcid": "37570" 00:25:29.517 }, 00:25:29.517 "auth": { 00:25:29.517 "state": "completed", 00:25:29.517 "digest": "sha256", 00:25:29.517 "dhgroup": "ffdhe4096" 00:25:29.517 } 00:25:29.517 } 00:25:29.517 ]' 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:29.517 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:29.777 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:29.777 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:29.777 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:29.777 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:29.777 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:30.345 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:32.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
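The trace above repeats one authentication round per DH group and key index: the host-side bdev_nvme options are set to the digest/dhgroup under test, the host NQN is registered on the subsystem with the matching --dhchap-key/--dhchap-ctrlr-key pair, a controller is attached and the qpair's auth block is checked over RPC, then the controller is detached, the kernel initiator path is exercised with nvme connect/disconnect, and the host is removed again. A condensed sketch of one such round follows, using the socket path, addresses and RPC names seen in this run; hostnqn, hostid and DHHC_SECRET are placeholders standing in for the uuid-derived values printed in the log, not literal test variables:

  # host-side RPC (-s /var/tmp/host.sock): select digest and DH group under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target-side RPC (default socket): allow the host with a key/ctrlr-key pair from the keyring
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host-side attach; this is where the DH-HMAC-CHAP exchange actually runs
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # target-side check: the qpair's auth object should report state "completed" with the expected digest/dhgroup
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same round via the kernel initiator, passing the secret directly on the command line
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$DHHC_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"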
00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.253 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.194 00:25:33.194 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:33.194 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:33.194 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:33.453 { 00:25:33.453 "cntlid": 27, 00:25:33.453 "qid": 0, 00:25:33.453 "state": "enabled", 00:25:33.453 "thread": "nvmf_tgt_poll_group_000", 00:25:33.453 "listen_address": { 00:25:33.453 "trtype": "TCP", 00:25:33.453 "adrfam": "IPv4", 00:25:33.453 "traddr": "10.0.0.2", 00:25:33.453 "trsvcid": "4420" 00:25:33.453 }, 00:25:33.453 "peer_address": { 00:25:33.453 "trtype": "TCP", 00:25:33.453 "adrfam": "IPv4", 00:25:33.453 "traddr": "10.0.0.1", 00:25:33.453 "trsvcid": "60270" 00:25:33.453 }, 00:25:33.453 "auth": { 00:25:33.453 "state": "completed", 00:25:33.453 "digest": "sha256", 00:25:33.453 "dhgroup": "ffdhe4096" 00:25:33.453 } 00:25:33.453 } 00:25:33.453 ]' 00:25:33.453 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:33.713 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:33.713 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:33.713 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:33.713 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:33.713 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:33.713 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:33.714 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:34.283 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:36.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:36.194 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.454 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.397 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.397 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:37.657 { 00:25:37.657 "cntlid": 29, 00:25:37.657 "qid": 0, 00:25:37.657 "state": "enabled", 00:25:37.657 "thread": "nvmf_tgt_poll_group_000", 00:25:37.657 "listen_address": { 00:25:37.657 "trtype": "TCP", 00:25:37.657 "adrfam": "IPv4", 00:25:37.657 "traddr": "10.0.0.2", 00:25:37.657 "trsvcid": "4420" 00:25:37.657 }, 00:25:37.657 "peer_address": { 00:25:37.657 "trtype": "TCP", 00:25:37.657 "adrfam": "IPv4", 00:25:37.657 "traddr": "10.0.0.1", 00:25:37.657 "trsvcid": "60302" 00:25:37.657 }, 00:25:37.657 "auth": { 00:25:37.657 "state": "completed", 00:25:37.657 "digest": "sha256", 00:25:37.657 "dhgroup": "ffdhe4096" 00:25:37.657 } 00:25:37.657 } 00:25:37.657 ]' 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:37.657 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:38.227 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:40.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.136 23:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:40.136 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:40.396 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:41.337 00:25:41.337 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:41.337 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:41.337 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:41.925 { 00:25:41.925 "cntlid": 31, 00:25:41.925 "qid": 0, 00:25:41.925 "state": "enabled", 00:25:41.925 "thread": "nvmf_tgt_poll_group_000", 00:25:41.925 "listen_address": { 00:25:41.925 "trtype": "TCP", 00:25:41.925 "adrfam": "IPv4", 00:25:41.925 "traddr": "10.0.0.2", 00:25:41.925 "trsvcid": "4420" 00:25:41.925 }, 00:25:41.925 "peer_address": { 00:25:41.925 "trtype": "TCP", 00:25:41.925 "adrfam": "IPv4", 00:25:41.925 "traddr": "10.0.0.1", 00:25:41.925 "trsvcid": "60324" 00:25:41.925 }, 00:25:41.925 "auth": { 00:25:41.925 "state": "completed", 00:25:41.925 "digest": "sha256", 00:25:41.925 "dhgroup": "ffdhe4096" 00:25:41.925 } 00:25:41.925 } 00:25:41.925 ]' 00:25:41.925 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:41.925 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:42.495 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:43.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:43.875 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.443 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.384 00:25:45.384 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:45.384 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:45.384 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:45.953 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.953 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:45.953 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.953 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:45.953 { 00:25:45.953 "cntlid": 33, 00:25:45.953 "qid": 0, 00:25:45.953 "state": "enabled", 00:25:45.953 "thread": "nvmf_tgt_poll_group_000", 00:25:45.953 "listen_address": { 
00:25:45.953 "trtype": "TCP", 00:25:45.953 "adrfam": "IPv4", 00:25:45.953 "traddr": "10.0.0.2", 00:25:45.953 "trsvcid": "4420" 00:25:45.953 }, 00:25:45.953 "peer_address": { 00:25:45.953 "trtype": "TCP", 00:25:45.953 "adrfam": "IPv4", 00:25:45.953 "traddr": "10.0.0.1", 00:25:45.953 "trsvcid": "60096" 00:25:45.953 }, 00:25:45.953 "auth": { 00:25:45.953 "state": "completed", 00:25:45.953 "digest": "sha256", 00:25:45.953 "dhgroup": "ffdhe6144" 00:25:45.953 } 00:25:45.953 } 00:25:45.953 ]' 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:45.953 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:46.522 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:47.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.462 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:25:48.403 23:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.403 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.972 00:25:49.231 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:49.231 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:49.231 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:49.490 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.490 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:49.490 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.490 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:49.490 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.490 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:49.490 { 00:25:49.490 "cntlid": 35, 00:25:49.490 "qid": 0, 00:25:49.490 "state": "enabled", 00:25:49.490 "thread": "nvmf_tgt_poll_group_000", 00:25:49.490 "listen_address": { 00:25:49.490 "trtype": "TCP", 00:25:49.490 "adrfam": "IPv4", 00:25:49.490 "traddr": "10.0.0.2", 00:25:49.490 "trsvcid": "4420" 00:25:49.490 }, 00:25:49.490 "peer_address": { 00:25:49.490 "trtype": "TCP", 00:25:49.490 "adrfam": "IPv4", 00:25:49.490 "traddr": "10.0.0.1", 00:25:49.490 "trsvcid": "60116" 00:25:49.490 
}, 00:25:49.490 "auth": { 00:25:49.490 "state": "completed", 00:25:49.490 "digest": "sha256", 00:25:49.490 "dhgroup": "ffdhe6144" 00:25:49.490 } 00:25:49.491 } 00:25:49.491 ]' 00:25:49.491 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:49.491 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:49.491 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:49.491 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:49.491 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:49.750 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:49.750 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:49.750 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:50.010 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:51.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.392 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:51.963 23:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.963 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.900 00:25:52.900 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:52.900 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:52.900 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:53.837 { 00:25:53.837 "cntlid": 37, 00:25:53.837 "qid": 0, 00:25:53.837 "state": "enabled", 00:25:53.837 "thread": "nvmf_tgt_poll_group_000", 00:25:53.837 "listen_address": { 00:25:53.837 "trtype": "TCP", 00:25:53.837 "adrfam": "IPv4", 00:25:53.837 "traddr": "10.0.0.2", 00:25:53.837 "trsvcid": "4420" 00:25:53.837 }, 00:25:53.837 "peer_address": { 00:25:53.837 "trtype": "TCP", 00:25:53.837 "adrfam": "IPv4", 00:25:53.837 "traddr": "10.0.0.1", 00:25:53.837 "trsvcid": "41212" 00:25:53.837 }, 00:25:53.837 "auth": { 00:25:53.837 "state": "completed", 00:25:53.837 "digest": "sha256", 00:25:53.837 "dhgroup": "ffdhe6144" 00:25:53.837 } 00:25:53.837 } 00:25:53.837 ]' 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:53.837 23:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:53.837 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:53.837 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:53.837 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:53.837 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:54.407 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:56.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.314 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:56.574 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:57.529 00:25:57.529 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:57.529 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:57.529 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:58.099 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.099 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:58.099 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.099 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.358 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:58.359 { 00:25:58.359 "cntlid": 39, 00:25:58.359 "qid": 0, 00:25:58.359 "state": "enabled", 00:25:58.359 "thread": "nvmf_tgt_poll_group_000", 00:25:58.359 "listen_address": { 00:25:58.359 "trtype": "TCP", 00:25:58.359 "adrfam": "IPv4", 00:25:58.359 "traddr": "10.0.0.2", 00:25:58.359 "trsvcid": "4420" 00:25:58.359 }, 00:25:58.359 "peer_address": { 00:25:58.359 "trtype": "TCP", 00:25:58.359 "adrfam": "IPv4", 00:25:58.359 "traddr": "10.0.0.1", 00:25:58.359 "trsvcid": "41240" 00:25:58.359 }, 00:25:58.359 "auth": { 00:25:58.359 "state": "completed", 00:25:58.359 "digest": "sha256", 00:25:58.359 "dhgroup": "ffdhe6144" 00:25:58.359 } 00:25:58.359 } 00:25:58.359 ]' 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:58.359 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:58.929 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:26:00.892 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:00.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:00.892 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:00.892 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.892 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:00.892 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.892 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.893 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:00.893 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:00.893 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.152 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.536 00:26:02.536 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:02.536 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:02.536 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:03.108 { 00:26:03.108 "cntlid": 41, 00:26:03.108 "qid": 0, 00:26:03.108 "state": "enabled", 00:26:03.108 "thread": "nvmf_tgt_poll_group_000", 00:26:03.108 "listen_address": { 00:26:03.108 "trtype": "TCP", 00:26:03.108 "adrfam": "IPv4", 00:26:03.108 "traddr": "10.0.0.2", 00:26:03.108 "trsvcid": "4420" 00:26:03.108 }, 00:26:03.108 "peer_address": { 00:26:03.108 "trtype": "TCP", 00:26:03.108 "adrfam": "IPv4", 00:26:03.108 "traddr": "10.0.0.1", 00:26:03.108 "trsvcid": "56446" 00:26:03.108 }, 00:26:03.108 "auth": { 00:26:03.108 "state": "completed", 00:26:03.108 "digest": "sha256", 00:26:03.108 "dhgroup": "ffdhe8192" 00:26:03.108 } 00:26:03.108 } 00:26:03.108 ]' 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:03.108 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:03.109 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:03.109 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:26:03.109 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:03.678 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:05.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:05.587 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.856 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.235 00:26:07.235 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:07.235 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:07.235 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:07.804 { 00:26:07.804 "cntlid": 43, 00:26:07.804 "qid": 0, 00:26:07.804 "state": "enabled", 00:26:07.804 "thread": "nvmf_tgt_poll_group_000", 00:26:07.804 "listen_address": { 00:26:07.804 "trtype": "TCP", 00:26:07.804 "adrfam": "IPv4", 00:26:07.804 "traddr": "10.0.0.2", 00:26:07.804 "trsvcid": "4420" 00:26:07.804 }, 00:26:07.804 "peer_address": { 00:26:07.804 "trtype": "TCP", 00:26:07.804 "adrfam": "IPv4", 00:26:07.804 "traddr": "10.0.0.1", 00:26:07.804 "trsvcid": "56474" 00:26:07.804 }, 00:26:07.804 "auth": { 00:26:07.804 "state": "completed", 00:26:07.804 "digest": "sha256", 00:26:07.804 "dhgroup": "ffdhe8192" 00:26:07.804 } 00:26:07.804 } 00:26:07.804 ]' 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:07.804 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:07.805 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:07.805 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:07.805 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:08.374 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:09.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.756 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.326 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.327 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.708 00:26:11.708 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:11.708 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:11.708 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:12.279 { 00:26:12.279 "cntlid": 45, 00:26:12.279 "qid": 0, 00:26:12.279 "state": "enabled", 00:26:12.279 "thread": "nvmf_tgt_poll_group_000", 00:26:12.279 "listen_address": { 00:26:12.279 "trtype": "TCP", 00:26:12.279 "adrfam": "IPv4", 00:26:12.279 "traddr": "10.0.0.2", 00:26:12.279 "trsvcid": "4420" 00:26:12.279 }, 00:26:12.279 "peer_address": { 00:26:12.279 "trtype": "TCP", 00:26:12.279 "adrfam": "IPv4", 00:26:12.279 "traddr": "10.0.0.1", 00:26:12.279 "trsvcid": "56510" 00:26:12.279 }, 00:26:12.279 "auth": { 00:26:12.279 "state": "completed", 00:26:12.279 "digest": "sha256", 00:26:12.279 "dhgroup": "ffdhe8192" 00:26:12.279 } 00:26:12.279 } 00:26:12.279 ]' 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:12.279 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:12.540 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:12.540 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:12.540 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:13.109 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret 
DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:14.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.491 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:15.063 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:16.445 00:26:16.445 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:16.445 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:16.445 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:17.014 { 00:26:17.014 "cntlid": 47, 00:26:17.014 "qid": 0, 00:26:17.014 "state": "enabled", 00:26:17.014 "thread": "nvmf_tgt_poll_group_000", 00:26:17.014 "listen_address": { 00:26:17.014 "trtype": "TCP", 00:26:17.014 "adrfam": "IPv4", 00:26:17.014 "traddr": "10.0.0.2", 00:26:17.014 "trsvcid": "4420" 00:26:17.014 }, 00:26:17.014 "peer_address": { 00:26:17.014 "trtype": "TCP", 00:26:17.014 "adrfam": "IPv4", 00:26:17.014 "traddr": "10.0.0.1", 00:26:17.014 "trsvcid": "58486" 00:26:17.014 }, 00:26:17.014 "auth": { 00:26:17.014 "state": "completed", 00:26:17.014 "digest": "sha256", 00:26:17.014 "dhgroup": "ffdhe8192" 00:26:17.014 } 00:26:17.014 } 00:26:17.014 ]' 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:17.014 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:17.273 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:17.273 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:17.273 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:17.273 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:17.274 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:17.843 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:19.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:19.259 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.199 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.199 00:26:20.458 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:20.458 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:20.458 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:20.717 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.717 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:20.718 { 00:26:20.718 "cntlid": 49, 00:26:20.718 "qid": 0, 00:26:20.718 "state": "enabled", 00:26:20.718 "thread": "nvmf_tgt_poll_group_000", 00:26:20.718 "listen_address": { 00:26:20.718 "trtype": "TCP", 00:26:20.718 "adrfam": "IPv4", 00:26:20.718 "traddr": "10.0.0.2", 00:26:20.718 "trsvcid": "4420" 00:26:20.718 }, 00:26:20.718 "peer_address": { 00:26:20.718 "trtype": "TCP", 00:26:20.718 "adrfam": "IPv4", 00:26:20.718 "traddr": "10.0.0.1", 00:26:20.718 "trsvcid": "58500" 00:26:20.718 }, 00:26:20.718 "auth": { 00:26:20.718 "state": "completed", 00:26:20.718 "digest": "sha384", 00:26:20.718 "dhgroup": "null" 00:26:20.718 } 00:26:20.718 } 00:26:20.718 ]' 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:20.718 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:20.718 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:20.718 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:20.977 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:20.977 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:20.977 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:21.547 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:22.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:22.927 23:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:22.927 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.496 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.756 00:26:23.756 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:23.756 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:23.756 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:24.016 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.016 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:24.016 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.016 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:24.277 { 00:26:24.277 "cntlid": 51, 00:26:24.277 "qid": 0, 00:26:24.277 "state": "enabled", 00:26:24.277 "thread": "nvmf_tgt_poll_group_000", 00:26:24.277 "listen_address": { 00:26:24.277 "trtype": "TCP", 00:26:24.277 "adrfam": "IPv4", 00:26:24.277 "traddr": "10.0.0.2", 00:26:24.277 "trsvcid": "4420" 00:26:24.277 }, 00:26:24.277 "peer_address": { 00:26:24.277 "trtype": "TCP", 00:26:24.277 "adrfam": "IPv4", 00:26:24.277 "traddr": "10.0.0.1", 00:26:24.277 "trsvcid": "43378" 00:26:24.277 }, 00:26:24.277 "auth": { 00:26:24.277 "state": "completed", 00:26:24.277 "digest": "sha384", 00:26:24.277 "dhgroup": "null" 00:26:24.277 } 00:26:24.277 } 00:26:24.277 ]' 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:24.277 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:24.847 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:26.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:26.221 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.480 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.049 00:26:27.049 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:27.049 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:27.049 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:27.616 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.616 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:27.616 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.616 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:27.617 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:26:27.617 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:27.617 { 00:26:27.617 "cntlid": 53, 00:26:27.617 "qid": 0, 00:26:27.617 "state": "enabled", 00:26:27.617 "thread": "nvmf_tgt_poll_group_000", 00:26:27.617 "listen_address": { 00:26:27.617 "trtype": "TCP", 00:26:27.617 "adrfam": "IPv4", 00:26:27.617 "traddr": "10.0.0.2", 00:26:27.617 "trsvcid": "4420" 00:26:27.617 }, 00:26:27.617 "peer_address": { 00:26:27.617 "trtype": "TCP", 00:26:27.617 "adrfam": "IPv4", 00:26:27.617 "traddr": "10.0.0.1", 00:26:27.617 "trsvcid": "43396" 00:26:27.617 }, 00:26:27.617 "auth": { 00:26:27.617 "state": "completed", 00:26:27.617 "digest": "sha384", 00:26:27.617 "dhgroup": "null" 00:26:27.617 } 00:26:27.617 } 00:26:27.617 ]' 00:26:27.617 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:27.876 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:27.876 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:27.876 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:27.876 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:27.876 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:27.876 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:27.876 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:28.446 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:29.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:29.825 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:30.395 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:30.963 00:26:30.963 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:30.963 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:30.963 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:31.221 { 00:26:31.221 "cntlid": 55, 00:26:31.221 "qid": 0, 00:26:31.221 "state": "enabled", 00:26:31.221 "thread": "nvmf_tgt_poll_group_000", 00:26:31.221 "listen_address": { 00:26:31.221 "trtype": "TCP", 00:26:31.221 "adrfam": "IPv4", 00:26:31.221 "traddr": "10.0.0.2", 00:26:31.221 "trsvcid": "4420" 00:26:31.221 }, 00:26:31.221 "peer_address": { 
00:26:31.221 "trtype": "TCP", 00:26:31.221 "adrfam": "IPv4", 00:26:31.221 "traddr": "10.0.0.1", 00:26:31.221 "trsvcid": "43418" 00:26:31.221 }, 00:26:31.221 "auth": { 00:26:31.221 "state": "completed", 00:26:31.221 "digest": "sha384", 00:26:31.221 "dhgroup": "null" 00:26:31.221 } 00:26:31.221 } 00:26:31.221 ]' 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:31.221 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:31.481 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:31.481 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:31.481 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:31.481 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:31.481 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:31.740 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:33.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:33.119 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.687 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.255 00:26:34.255 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:34.255 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:34.255 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:34.821 { 00:26:34.821 "cntlid": 57, 00:26:34.821 "qid": 0, 00:26:34.821 "state": "enabled", 00:26:34.821 "thread": "nvmf_tgt_poll_group_000", 00:26:34.821 "listen_address": { 00:26:34.821 "trtype": "TCP", 00:26:34.821 "adrfam": "IPv4", 00:26:34.821 "traddr": "10.0.0.2", 00:26:34.821 "trsvcid": "4420" 00:26:34.821 }, 00:26:34.821 "peer_address": { 00:26:34.821 "trtype": "TCP", 00:26:34.821 "adrfam": "IPv4", 00:26:34.821 "traddr": "10.0.0.1", 00:26:34.821 "trsvcid": "57892" 00:26:34.821 }, 00:26:34.821 "auth": { 00:26:34.821 "state": "completed", 00:26:34.821 "digest": "sha384", 00:26:34.821 "dhgroup": "ffdhe2048" 00:26:34.821 } 00:26:34.821 } 00:26:34.821 ]' 
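For orientation while reading the trace: every connect_authenticate iteration above replays the same RPC sequence against the target and the SPDK host application. Below is a minimal bash sketch of that sequence, reconstructed only from commands visible in this log; the rpc.py path, addresses, NQNs and the key names key1/ckey1 (DH-HMAC-CHAP keys registered earlier in the test, outside this excerpt) are taken from the trace, the DHHC-1 placeholders stand in for the secrets printed above, and the target-side calls are assumed to use the target's default RPC socket. Treat it as an illustrative outline, not the test script itself.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# Pin the host-side initiator to one digest/DH-group combination for this pass.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

# Authorize the host on the subsystem with the key pair under test (target-side RPC).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the SPDK host app; DH-HMAC-CHAP runs in-band here.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify: controller name on the host side, qpair auth state on the target side.
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'       # expect: nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect: completed

# Repeat the check with the kernel initiator using the raw DHHC-1 secrets, then clean up.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
  --dhchap-secret 'DHHC-1:01:<secret from trace>' --dhchap-ctrl-secret 'DHHC-1:02:<secret from trace>'
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The later iterations in this log differ only in which key index is exercised (key0 through key3; key3 carries no controller key in this run) and in the --dhchap-dhgroups value, which steps through null, ffdhe2048 and ffdhe3072 for the sha384 digest.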
00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:34.821 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:34.821 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:34.821 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:34.821 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:34.821 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:34.821 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:35.078 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:36.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.458 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.718 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.656 00:26:37.656 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:37.656 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:37.656 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:37.915 { 00:26:37.915 "cntlid": 59, 00:26:37.915 "qid": 0, 00:26:37.915 "state": "enabled", 00:26:37.915 "thread": "nvmf_tgt_poll_group_000", 00:26:37.915 "listen_address": { 00:26:37.915 "trtype": "TCP", 00:26:37.915 "adrfam": "IPv4", 00:26:37.915 "traddr": "10.0.0.2", 00:26:37.915 "trsvcid": "4420" 00:26:37.915 }, 00:26:37.915 "peer_address": { 00:26:37.915 "trtype": "TCP", 00:26:37.915 "adrfam": "IPv4", 00:26:37.915 "traddr": "10.0.0.1", 00:26:37.915 "trsvcid": "57904" 00:26:37.915 }, 00:26:37.915 "auth": { 00:26:37.915 "state": "completed", 00:26:37.915 "digest": "sha384", 00:26:37.915 "dhgroup": "ffdhe2048" 00:26:37.915 } 00:26:37.915 } 00:26:37.915 ]' 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:37.915 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:38.175 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:38.175 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:38.175 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:38.176 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:38.176 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:38.745 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:40.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:40.124 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.383 
23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.383 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.321 00:26:41.321 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:41.321 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:41.321 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:41.581 { 00:26:41.581 "cntlid": 61, 00:26:41.581 "qid": 0, 00:26:41.581 "state": "enabled", 00:26:41.581 "thread": "nvmf_tgt_poll_group_000", 00:26:41.581 "listen_address": { 00:26:41.581 "trtype": "TCP", 00:26:41.581 "adrfam": "IPv4", 00:26:41.581 "traddr": "10.0.0.2", 00:26:41.581 "trsvcid": "4420" 00:26:41.581 }, 00:26:41.581 "peer_address": { 00:26:41.581 "trtype": "TCP", 00:26:41.581 "adrfam": "IPv4", 00:26:41.581 "traddr": "10.0.0.1", 00:26:41.581 "trsvcid": "57926" 00:26:41.581 }, 00:26:41.581 "auth": { 00:26:41.581 "state": "completed", 00:26:41.581 "digest": "sha384", 00:26:41.581 "dhgroup": "ffdhe2048" 00:26:41.581 } 00:26:41.581 } 00:26:41.581 ]' 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:41.581 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:41.840 23:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:41.840 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:41.840 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:42.100 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:43.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.480 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.051 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.311 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.311 
23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:44.311 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:44.882 00:26:44.882 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:44.882 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:44.882 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:45.451 { 00:26:45.451 "cntlid": 63, 00:26:45.451 "qid": 0, 00:26:45.451 "state": "enabled", 00:26:45.451 "thread": "nvmf_tgt_poll_group_000", 00:26:45.451 "listen_address": { 00:26:45.451 "trtype": "TCP", 00:26:45.451 "adrfam": "IPv4", 00:26:45.451 "traddr": "10.0.0.2", 00:26:45.451 "trsvcid": "4420" 00:26:45.451 }, 00:26:45.451 "peer_address": { 00:26:45.451 "trtype": "TCP", 00:26:45.451 "adrfam": "IPv4", 00:26:45.451 "traddr": "10.0.0.1", 00:26:45.451 "trsvcid": "33090" 00:26:45.451 }, 00:26:45.451 "auth": { 00:26:45.451 "state": "completed", 00:26:45.451 "digest": "sha384", 00:26:45.451 "dhgroup": "ffdhe2048" 00:26:45.451 } 00:26:45.451 } 00:26:45.451 ]' 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:45.451 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:26:45.709 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:47.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.092 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.662 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:26:47.662 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:47.662 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:47.662 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:47.662 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:47.663 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:47.663 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.663 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.663 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.663 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.663 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.663 23:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.603 00:26:48.603 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:48.603 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:48.603 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:49.173 { 00:26:49.173 "cntlid": 65, 00:26:49.173 "qid": 0, 00:26:49.173 "state": "enabled", 00:26:49.173 "thread": "nvmf_tgt_poll_group_000", 00:26:49.173 "listen_address": { 00:26:49.173 "trtype": "TCP", 00:26:49.173 "adrfam": "IPv4", 00:26:49.173 "traddr": "10.0.0.2", 00:26:49.173 "trsvcid": "4420" 00:26:49.173 }, 00:26:49.173 "peer_address": { 00:26:49.173 "trtype": "TCP", 00:26:49.173 "adrfam": "IPv4", 00:26:49.173 "traddr": "10.0.0.1", 00:26:49.173 "trsvcid": "33106" 00:26:49.173 }, 00:26:49.173 "auth": { 00:26:49.173 "state": "completed", 00:26:49.173 "digest": "sha384", 00:26:49.173 "dhgroup": "ffdhe3072" 00:26:49.173 } 00:26:49.173 } 00:26:49.173 ]' 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:49.173 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:49.432 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:49.432 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:49.432 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:49.692 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:51.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:51.602 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.863 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.432 00:26:52.432 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:52.432 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:52.432 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:52.692 { 00:26:52.692 "cntlid": 67, 00:26:52.692 "qid": 0, 00:26:52.692 "state": "enabled", 00:26:52.692 "thread": "nvmf_tgt_poll_group_000", 00:26:52.692 "listen_address": { 00:26:52.692 "trtype": "TCP", 00:26:52.692 "adrfam": "IPv4", 00:26:52.692 "traddr": "10.0.0.2", 00:26:52.692 "trsvcid": "4420" 00:26:52.692 }, 00:26:52.692 "peer_address": { 00:26:52.692 "trtype": "TCP", 00:26:52.692 "adrfam": "IPv4", 00:26:52.692 "traddr": "10.0.0.1", 00:26:52.692 "trsvcid": "35902" 00:26:52.692 }, 00:26:52.692 "auth": { 00:26:52.692 "state": "completed", 00:26:52.692 "digest": "sha384", 00:26:52.692 "dhgroup": "ffdhe3072" 00:26:52.692 } 00:26:52.692 } 00:26:52.692 ]' 00:26:52.692 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:52.951 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:53.220 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:26:54.610 23:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:54.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:54.610 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.179 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.755 00:26:55.755 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:55.755 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:26:55.755 23:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:56.328 { 00:26:56.328 "cntlid": 69, 00:26:56.328 "qid": 0, 00:26:56.328 "state": "enabled", 00:26:56.328 "thread": "nvmf_tgt_poll_group_000", 00:26:56.328 "listen_address": { 00:26:56.328 "trtype": "TCP", 00:26:56.328 "adrfam": "IPv4", 00:26:56.328 "traddr": "10.0.0.2", 00:26:56.328 "trsvcid": "4420" 00:26:56.328 }, 00:26:56.328 "peer_address": { 00:26:56.328 "trtype": "TCP", 00:26:56.328 "adrfam": "IPv4", 00:26:56.328 "traddr": "10.0.0.1", 00:26:56.328 "trsvcid": "35928" 00:26:56.328 }, 00:26:56.328 "auth": { 00:26:56.328 "state": "completed", 00:26:56.328 "digest": "sha384", 00:26:56.328 "dhgroup": "ffdhe3072" 00:26:56.328 } 00:26:56.328 } 00:26:56.328 ]' 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:56.328 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:56.896 23:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:58.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:58.272 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:58.532 23:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:59.103 00:26:59.103 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:59.103 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:59.103 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:59.673 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.673 23:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:59.673 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.673 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.673 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.673 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:59.673 { 00:26:59.673 "cntlid": 71, 00:26:59.673 "qid": 0, 00:26:59.673 "state": "enabled", 00:26:59.673 "thread": "nvmf_tgt_poll_group_000", 00:26:59.673 "listen_address": { 00:26:59.673 "trtype": "TCP", 00:26:59.673 "adrfam": "IPv4", 00:26:59.673 "traddr": "10.0.0.2", 00:26:59.673 "trsvcid": "4420" 00:26:59.673 }, 00:26:59.673 "peer_address": { 00:26:59.673 "trtype": "TCP", 00:26:59.673 "adrfam": "IPv4", 00:26:59.673 "traddr": "10.0.0.1", 00:26:59.673 "trsvcid": "35970" 00:26:59.673 }, 00:26:59.673 "auth": { 00:26:59.673 "state": "completed", 00:26:59.673 "digest": "sha384", 00:26:59.673 "dhgroup": "ffdhe3072" 00:26:59.673 } 00:26:59.673 } 00:26:59.673 ]' 00:26:59.673 23:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:59.931 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:59.932 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:59.932 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:59.932 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:59.932 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:59.932 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:59.932 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:00.870 23:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:02.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.252 23:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.252 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.511 23:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.079 00:27:03.079 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:03.080 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:03.080 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.649 23:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:03.649 { 00:27:03.649 "cntlid": 73, 00:27:03.649 "qid": 0, 00:27:03.649 "state": "enabled", 00:27:03.649 "thread": "nvmf_tgt_poll_group_000", 00:27:03.649 "listen_address": { 00:27:03.649 "trtype": "TCP", 00:27:03.649 "adrfam": "IPv4", 00:27:03.649 "traddr": "10.0.0.2", 00:27:03.649 "trsvcid": "4420" 00:27:03.649 }, 00:27:03.649 "peer_address": { 00:27:03.649 "trtype": "TCP", 00:27:03.649 "adrfam": "IPv4", 00:27:03.649 "traddr": "10.0.0.1", 00:27:03.649 "trsvcid": "53094" 00:27:03.649 }, 00:27:03.649 "auth": { 00:27:03.649 "state": "completed", 00:27:03.649 "digest": "sha384", 00:27:03.649 "dhgroup": "ffdhe4096" 00:27:03.649 } 00:27:03.649 } 00:27:03.649 ]' 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:03.649 23:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:03.909 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:03.909 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:03.909 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:03.909 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:03.909 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:04.167 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:06.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:06.076 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.336 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.276 00:27:07.276 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:07.276 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:07.276 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:07.844 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:27:07.844 { 00:27:07.844 "cntlid": 75, 00:27:07.844 "qid": 0, 00:27:07.844 "state": "enabled", 00:27:07.844 "thread": "nvmf_tgt_poll_group_000", 00:27:07.844 "listen_address": { 00:27:07.844 "trtype": "TCP", 00:27:07.844 "adrfam": "IPv4", 00:27:07.844 "traddr": "10.0.0.2", 00:27:07.844 "trsvcid": "4420" 00:27:07.844 }, 00:27:07.844 "peer_address": { 00:27:07.844 "trtype": "TCP", 00:27:07.844 "adrfam": "IPv4", 00:27:07.844 "traddr": "10.0.0.1", 00:27:07.844 "trsvcid": "53120" 00:27:07.844 }, 00:27:07.844 "auth": { 00:27:07.844 "state": "completed", 00:27:07.844 "digest": "sha384", 00:27:07.844 "dhgroup": "ffdhe4096" 00:27:07.844 } 00:27:07.844 } 00:27:07.844 ]' 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:07.844 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:08.103 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:08.103 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:08.103 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:08.668 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:10.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:10.042 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:10.611 
23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.611 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.557 00:27:11.557 23:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:11.557 23:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:11.557 23:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:12.132 { 00:27:12.132 "cntlid": 77, 00:27:12.132 "qid": 0, 00:27:12.132 "state": "enabled", 00:27:12.132 "thread": "nvmf_tgt_poll_group_000", 00:27:12.132 "listen_address": { 00:27:12.132 "trtype": "TCP", 00:27:12.132 "adrfam": "IPv4", 00:27:12.132 "traddr": "10.0.0.2", 00:27:12.132 "trsvcid": "4420" 00:27:12.132 }, 00:27:12.132 "peer_address": { 
00:27:12.132 "trtype": "TCP", 00:27:12.132 "adrfam": "IPv4", 00:27:12.132 "traddr": "10.0.0.1", 00:27:12.132 "trsvcid": "53148" 00:27:12.132 }, 00:27:12.132 "auth": { 00:27:12.132 "state": "completed", 00:27:12.132 "digest": "sha384", 00:27:12.132 "dhgroup": "ffdhe4096" 00:27:12.132 } 00:27:12.132 } 00:27:12.132 ]' 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:12.132 23:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:13.071 23:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:14.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.450 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:14.708 23:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:14.966 00:27:14.966 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:14.966 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:14.966 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:15.225 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.225 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:15.225 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.225 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:15.485 { 00:27:15.485 "cntlid": 79, 00:27:15.485 "qid": 0, 00:27:15.485 "state": "enabled", 00:27:15.485 "thread": "nvmf_tgt_poll_group_000", 00:27:15.485 "listen_address": { 00:27:15.485 "trtype": "TCP", 00:27:15.485 "adrfam": "IPv4", 00:27:15.485 "traddr": "10.0.0.2", 00:27:15.485 "trsvcid": "4420" 00:27:15.485 }, 00:27:15.485 "peer_address": { 00:27:15.485 "trtype": "TCP", 00:27:15.485 "adrfam": "IPv4", 00:27:15.485 "traddr": "10.0.0.1", 00:27:15.485 "trsvcid": "35576" 00:27:15.485 }, 00:27:15.485 "auth": { 00:27:15.485 "state": "completed", 00:27:15.485 "digest": "sha384", 00:27:15.485 "dhgroup": "ffdhe4096" 00:27:15.485 } 00:27:15.485 } 00:27:15.485 ]' 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:15.485 23:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:16.424 23:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:17.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.804 23:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.375 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.316 00:27:19.316 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:19.316 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:19.316 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:19.885 { 00:27:19.885 "cntlid": 81, 00:27:19.885 "qid": 0, 00:27:19.885 "state": "enabled", 00:27:19.885 "thread": "nvmf_tgt_poll_group_000", 00:27:19.885 "listen_address": { 00:27:19.885 "trtype": "TCP", 00:27:19.885 "adrfam": "IPv4", 00:27:19.885 "traddr": "10.0.0.2", 00:27:19.885 "trsvcid": "4420" 00:27:19.885 }, 00:27:19.885 "peer_address": { 00:27:19.885 "trtype": "TCP", 00:27:19.885 "adrfam": "IPv4", 00:27:19.885 "traddr": "10.0.0.1", 00:27:19.885 "trsvcid": "35598" 00:27:19.885 }, 00:27:19.885 "auth": { 00:27:19.885 "state": "completed", 00:27:19.885 "digest": "sha384", 00:27:19.885 "dhgroup": "ffdhe6144" 00:27:19.885 } 00:27:19.885 } 00:27:19.885 ]' 00:27:19.885 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:20.146 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:20.146 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:20.146 23:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:20.146 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:20.146 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:20.146 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:20.146 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:20.714 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:27:22.093 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:22.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:22.093 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:22.094 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.094 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.094 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.094 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:22.094 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.094 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.354 23:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.354 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.293 00:27:23.293 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:23.293 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:23.293 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:23.862 { 00:27:23.862 "cntlid": 83, 00:27:23.862 "qid": 0, 00:27:23.862 "state": "enabled", 00:27:23.862 "thread": "nvmf_tgt_poll_group_000", 00:27:23.862 "listen_address": { 00:27:23.862 "trtype": "TCP", 00:27:23.862 "adrfam": "IPv4", 00:27:23.862 "traddr": "10.0.0.2", 00:27:23.862 "trsvcid": "4420" 00:27:23.862 }, 00:27:23.862 "peer_address": { 00:27:23.862 "trtype": "TCP", 00:27:23.862 "adrfam": "IPv4", 00:27:23.862 "traddr": "10.0.0.1", 00:27:23.862 "trsvcid": "56508" 00:27:23.862 }, 00:27:23.862 "auth": { 00:27:23.862 "state": "completed", 00:27:23.862 "digest": "sha384", 00:27:23.862 "dhgroup": "ffdhe6144" 00:27:23.862 } 00:27:23.862 } 00:27:23.862 ]' 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:23.862 23:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:23.862 23:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:23.862 23:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:23.862 23:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:23.862 23:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:23.862 23:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:24.801 23:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:26.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.180 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.439 23:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.439 23:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.376 00:27:27.376 23:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:27.376 23:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:27.376 23:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:27.945 { 00:27:27.945 "cntlid": 85, 00:27:27.945 "qid": 0, 00:27:27.945 "state": "enabled", 00:27:27.945 "thread": "nvmf_tgt_poll_group_000", 00:27:27.945 "listen_address": { 00:27:27.945 "trtype": "TCP", 00:27:27.945 "adrfam": "IPv4", 00:27:27.945 "traddr": "10.0.0.2", 00:27:27.945 "trsvcid": "4420" 00:27:27.945 }, 00:27:27.945 "peer_address": { 00:27:27.945 "trtype": "TCP", 00:27:27.945 "adrfam": "IPv4", 00:27:27.945 "traddr": "10.0.0.1", 00:27:27.945 "trsvcid": "56526" 00:27:27.945 }, 00:27:27.945 "auth": { 00:27:27.945 "state": "completed", 00:27:27.945 "digest": "sha384", 00:27:27.945 "dhgroup": "ffdhe6144" 00:27:27.945 } 00:27:27.945 } 00:27:27.945 ]' 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:27.945 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:28.886 23:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:30.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.291 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.551 23:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:30.551 23:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:31.490 00:27:31.490 23:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:31.490 23:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:31.490 23:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:32.060 { 00:27:32.060 "cntlid": 87, 00:27:32.060 "qid": 0, 00:27:32.060 "state": "enabled", 00:27:32.060 "thread": "nvmf_tgt_poll_group_000", 00:27:32.060 "listen_address": { 00:27:32.060 "trtype": "TCP", 00:27:32.060 "adrfam": "IPv4", 00:27:32.060 "traddr": "10.0.0.2", 00:27:32.060 "trsvcid": "4420" 00:27:32.060 }, 00:27:32.060 "peer_address": { 00:27:32.060 "trtype": "TCP", 00:27:32.060 "adrfam": "IPv4", 00:27:32.060 "traddr": "10.0.0.1", 00:27:32.060 "trsvcid": "56548" 00:27:32.060 }, 00:27:32.060 "auth": { 00:27:32.060 "state": "completed", 00:27:32.060 "digest": "sha384", 00:27:32.060 "dhgroup": "ffdhe6144" 00:27:32.060 } 00:27:32.060 } 00:27:32.060 ]' 00:27:32.060 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:32.319 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:32.577 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:34.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.484 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.742 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.742 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.742 23:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.678 00:27:35.678 23:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:35.678 23:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:35.678 23:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:36.247 { 00:27:36.247 "cntlid": 89, 00:27:36.247 "qid": 0, 00:27:36.247 "state": "enabled", 00:27:36.247 "thread": "nvmf_tgt_poll_group_000", 00:27:36.247 "listen_address": { 00:27:36.247 "trtype": "TCP", 00:27:36.247 "adrfam": "IPv4", 00:27:36.247 "traddr": "10.0.0.2", 00:27:36.247 "trsvcid": "4420" 00:27:36.247 }, 00:27:36.247 "peer_address": { 00:27:36.247 "trtype": "TCP", 00:27:36.247 "adrfam": "IPv4", 00:27:36.247 "traddr": "10.0.0.1", 00:27:36.247 "trsvcid": "53928" 00:27:36.247 }, 00:27:36.247 "auth": { 00:27:36.247 "state": "completed", 00:27:36.247 "digest": "sha384", 00:27:36.247 "dhgroup": "ffdhe8192" 00:27:36.247 } 00:27:36.247 } 00:27:36.247 ]' 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:36.247 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:36.505 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:36.505 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:36.505 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:36.505 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:36.505 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:36.763 23:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:27:38.142 23:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:38.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.142 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:38.400 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.401 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.401 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.401 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.401 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.401 23:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.775 00:27:39.775 23:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:39.775 23:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:27:39.775 23:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:40.033 { 00:27:40.033 "cntlid": 91, 00:27:40.033 "qid": 0, 00:27:40.033 "state": "enabled", 00:27:40.033 "thread": "nvmf_tgt_poll_group_000", 00:27:40.033 "listen_address": { 00:27:40.033 "trtype": "TCP", 00:27:40.033 "adrfam": "IPv4", 00:27:40.033 "traddr": "10.0.0.2", 00:27:40.033 "trsvcid": "4420" 00:27:40.033 }, 00:27:40.033 "peer_address": { 00:27:40.033 "trtype": "TCP", 00:27:40.033 "adrfam": "IPv4", 00:27:40.033 "traddr": "10.0.0.1", 00:27:40.033 "trsvcid": "53956" 00:27:40.033 }, 00:27:40.033 "auth": { 00:27:40.033 "state": "completed", 00:27:40.033 "digest": "sha384", 00:27:40.033 "dhgroup": "ffdhe8192" 00:27:40.033 } 00:27:40.033 } 00:27:40.033 ]' 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:40.033 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:40.601 23:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:41.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:41.539 23:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.107 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.366 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.366 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.366 23:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.302 00:27:43.302 23:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:43.302 23:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:43.302 23:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:43.871 { 00:27:43.871 "cntlid": 93, 00:27:43.871 "qid": 0, 00:27:43.871 "state": "enabled", 00:27:43.871 "thread": "nvmf_tgt_poll_group_000", 00:27:43.871 "listen_address": { 00:27:43.871 "trtype": "TCP", 00:27:43.871 "adrfam": "IPv4", 00:27:43.871 "traddr": "10.0.0.2", 00:27:43.871 "trsvcid": "4420" 00:27:43.871 }, 00:27:43.871 "peer_address": { 00:27:43.871 "trtype": "TCP", 00:27:43.871 "adrfam": "IPv4", 00:27:43.871 "traddr": "10.0.0.1", 00:27:43.871 "trsvcid": "56842" 00:27:43.871 }, 00:27:43.871 "auth": { 00:27:43.871 "state": "completed", 00:27:43.871 "digest": "sha384", 00:27:43.871 "dhgroup": "ffdhe8192" 00:27:43.871 } 00:27:43.871 } 00:27:43.871 ]' 00:27:43.871 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:44.129 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:44.387 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:27:45.764 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:46.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:46.022 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:46.022 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.022 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.022 23:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.022 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:46.022 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.022 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:46.280 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:47.219 00:27:47.219 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:47.219 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:47.219 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:47.795 { 00:27:47.795 "cntlid": 95, 00:27:47.795 "qid": 0, 00:27:47.795 "state": "enabled", 00:27:47.795 "thread": "nvmf_tgt_poll_group_000", 00:27:47.795 "listen_address": { 00:27:47.795 "trtype": "TCP", 00:27:47.795 "adrfam": "IPv4", 00:27:47.795 "traddr": "10.0.0.2", 00:27:47.795 "trsvcid": "4420" 00:27:47.795 }, 00:27:47.795 "peer_address": { 00:27:47.795 "trtype": "TCP", 00:27:47.795 "adrfam": "IPv4", 00:27:47.795 "traddr": "10.0.0.1", 00:27:47.795 "trsvcid": "56870" 00:27:47.795 }, 00:27:47.795 "auth": { 00:27:47.795 "state": "completed", 00:27:47.795 "digest": "sha384", 00:27:47.795 "dhgroup": "ffdhe8192" 00:27:47.795 } 00:27:47.795 } 00:27:47.795 ]' 00:27:47.795 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:47.795 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:47.795 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:48.059 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:48.059 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:48.059 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:48.059 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:48.059 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:48.317 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:50.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:50.216 23:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:50.216 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.474 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.040 00:27:51.040 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:51.040 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:51.040 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.605 23:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:51.605 { 00:27:51.605 "cntlid": 97, 00:27:51.605 "qid": 0, 00:27:51.605 "state": "enabled", 00:27:51.605 "thread": "nvmf_tgt_poll_group_000", 00:27:51.605 "listen_address": { 00:27:51.605 "trtype": "TCP", 00:27:51.605 "adrfam": "IPv4", 00:27:51.605 "traddr": "10.0.0.2", 00:27:51.605 "trsvcid": "4420" 00:27:51.605 }, 00:27:51.605 "peer_address": { 00:27:51.605 "trtype": "TCP", 00:27:51.605 "adrfam": "IPv4", 00:27:51.605 "traddr": "10.0.0.1", 00:27:51.605 "trsvcid": "56888" 00:27:51.605 }, 00:27:51.605 "auth": { 00:27:51.605 "state": "completed", 00:27:51.605 "digest": "sha512", 00:27:51.605 "dhgroup": "null" 00:27:51.605 } 00:27:51.605 } 00:27:51.605 ]' 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:51.605 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:51.606 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:51.606 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:51.606 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:51.606 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:51.606 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:51.863 23:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:53.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:53.762 23:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.020 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.586 00:27:54.845 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:54.845 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:54.845 23:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:55.411 { 00:27:55.411 "cntlid": 99, 00:27:55.411 "qid": 0, 00:27:55.411 "state": "enabled", 00:27:55.411 "thread": "nvmf_tgt_poll_group_000", 00:27:55.411 "listen_address": { 00:27:55.411 "trtype": "TCP", 00:27:55.411 "adrfam": "IPv4", 00:27:55.411 
"traddr": "10.0.0.2", 00:27:55.411 "trsvcid": "4420" 00:27:55.411 }, 00:27:55.411 "peer_address": { 00:27:55.411 "trtype": "TCP", 00:27:55.411 "adrfam": "IPv4", 00:27:55.411 "traddr": "10.0.0.1", 00:27:55.411 "trsvcid": "33756" 00:27:55.411 }, 00:27:55.411 "auth": { 00:27:55.411 "state": "completed", 00:27:55.411 "digest": "sha512", 00:27:55.411 "dhgroup": "null" 00:27:55.411 } 00:27:55.411 } 00:27:55.411 ]' 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:55.411 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:55.670 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:55.670 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:55.670 23:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:55.928 23:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:57.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:57.302 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:57.867 23:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.867 23:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.433 00:27:58.433 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:58.433 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:58.433 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:58.691 { 00:27:58.691 "cntlid": 101, 00:27:58.691 "qid": 0, 00:27:58.691 "state": "enabled", 00:27:58.691 "thread": "nvmf_tgt_poll_group_000", 00:27:58.691 "listen_address": { 00:27:58.691 "trtype": "TCP", 00:27:58.691 "adrfam": "IPv4", 00:27:58.691 "traddr": "10.0.0.2", 00:27:58.691 "trsvcid": "4420" 00:27:58.691 }, 00:27:58.691 "peer_address": { 00:27:58.691 "trtype": "TCP", 00:27:58.691 "adrfam": "IPv4", 00:27:58.691 "traddr": "10.0.0.1", 00:27:58.691 "trsvcid": "33782" 00:27:58.691 }, 00:27:58.691 "auth": { 00:27:58.691 "state": "completed", 00:27:58.691 "digest": "sha512", 00:27:58.691 "dhgroup": "null" 
00:27:58.691 } 00:27:58.691 } 00:27:58.691 ]' 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:58.691 23:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:58.949 23:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:58.949 23:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:58.949 23:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:58.949 23:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:58.949 23:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:59.515 23:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:00.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:00.889 23:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:00.889 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:01.456 00:28:01.456 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:01.456 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:01.456 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:01.714 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.714 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:01.714 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.714 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.715 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.715 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:01.715 { 00:28:01.715 "cntlid": 103, 00:28:01.715 "qid": 0, 00:28:01.715 "state": "enabled", 00:28:01.715 "thread": "nvmf_tgt_poll_group_000", 00:28:01.715 "listen_address": { 00:28:01.715 "trtype": "TCP", 00:28:01.715 "adrfam": "IPv4", 00:28:01.715 "traddr": "10.0.0.2", 00:28:01.715 "trsvcid": "4420" 00:28:01.715 }, 00:28:01.715 "peer_address": { 00:28:01.715 "trtype": "TCP", 00:28:01.715 "adrfam": "IPv4", 00:28:01.715 "traddr": "10.0.0.1", 00:28:01.715 "trsvcid": "33808" 00:28:01.715 }, 00:28:01.715 "auth": { 00:28:01.715 "state": "completed", 00:28:01.715 "digest": "sha512", 00:28:01.715 "dhgroup": "null" 00:28:01.715 } 00:28:01.715 } 00:28:01.715 ]' 00:28:01.715 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:01.715 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:01.715 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:01.715 23:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:01.715 23:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:01.973 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:01.973 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:01.973 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:02.231 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:28:03.604 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:03.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:03.604 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:03.604 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.604 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.862 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.862 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.863 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:03.863 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:03.863 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.121 23:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.121 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.379 00:28:04.379 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:04.379 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:04.379 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:04.637 { 00:28:04.637 "cntlid": 105, 00:28:04.637 "qid": 0, 00:28:04.637 "state": "enabled", 00:28:04.637 "thread": "nvmf_tgt_poll_group_000", 00:28:04.637 "listen_address": { 00:28:04.637 "trtype": "TCP", 00:28:04.637 "adrfam": "IPv4", 00:28:04.637 "traddr": "10.0.0.2", 00:28:04.637 "trsvcid": "4420" 00:28:04.637 }, 00:28:04.637 "peer_address": { 00:28:04.637 "trtype": "TCP", 00:28:04.637 "adrfam": "IPv4", 00:28:04.637 "traddr": "10.0.0.1", 00:28:04.637 "trsvcid": "58102" 00:28:04.637 }, 00:28:04.637 "auth": { 00:28:04.637 "state": "completed", 00:28:04.637 "digest": "sha512", 00:28:04.637 "dhgroup": "ffdhe2048" 00:28:04.637 } 00:28:04.637 } 00:28:04.637 ]' 00:28:04.637 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:04.894 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:05.489 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:28:06.864 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:06.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:06.864 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:06.864 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.864 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.864 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.865 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:06.865 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.865 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.439 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.701 00:28:07.702 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:07.702 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:07.702 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:08.269 { 00:28:08.269 "cntlid": 107, 00:28:08.269 "qid": 0, 00:28:08.269 "state": "enabled", 00:28:08.269 "thread": "nvmf_tgt_poll_group_000", 00:28:08.269 "listen_address": { 00:28:08.269 "trtype": "TCP", 00:28:08.269 "adrfam": "IPv4", 00:28:08.269 "traddr": "10.0.0.2", 00:28:08.269 "trsvcid": "4420" 00:28:08.269 }, 00:28:08.269 "peer_address": { 00:28:08.269 "trtype": "TCP", 00:28:08.269 "adrfam": "IPv4", 00:28:08.269 "traddr": "10.0.0.1", 00:28:08.269 "trsvcid": "58124" 00:28:08.269 }, 00:28:08.269 "auth": { 00:28:08.269 "state": "completed", 00:28:08.269 "digest": "sha512", 00:28:08.269 "dhgroup": "ffdhe2048" 00:28:08.269 } 00:28:08.269 } 00:28:08.269 ]' 00:28:08.269 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:08.528 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:09.462 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:10.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.835 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:11.093 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.094 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.094 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.094 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.094 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
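[editor's note] For readers following the log, the loop above repeats one target-side sequence for every digest/dhgroup/key combination: reconfigure the in-process host's DH-HMAC-CHAP options, register the host NQN on the subsystem with the key under test, attach a controller through the host RPC socket (this is where authentication actually runs), verify the resulting qpair, then tear everything down. The condensed sketch below uses only rpc.py calls and flags that appear verbatim in this run; the paths, NQNs, key names and host UUID are taken from this log and would differ in another environment.

    # Values copied from this run; adjust for your own setup.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock                      # in-process host (bdev_nvme) RPC socket
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # 1. Point the host at the digest/dhgroup under test.
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # 2. Allow the host NQN on the subsystem with the key (and controller key) being exercised.
    #    Target-side calls go to the default SPDK RPC socket in this run, so no -s here.
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller over TCP; DH-HMAC-CHAP negotiation happens during this attach.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 4. Inspect the qpair that was created, then tear the pairing down for the next iteration.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN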
00:28:11.094 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.660 00:28:11.660 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:11.660 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:11.660 23:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:12.226 { 00:28:12.226 "cntlid": 109, 00:28:12.226 "qid": 0, 00:28:12.226 "state": "enabled", 00:28:12.226 "thread": "nvmf_tgt_poll_group_000", 00:28:12.226 "listen_address": { 00:28:12.226 "trtype": "TCP", 00:28:12.226 "adrfam": "IPv4", 00:28:12.226 "traddr": "10.0.0.2", 00:28:12.226 "trsvcid": "4420" 00:28:12.226 }, 00:28:12.226 "peer_address": { 00:28:12.226 "trtype": "TCP", 00:28:12.226 "adrfam": "IPv4", 00:28:12.226 "traddr": "10.0.0.1", 00:28:12.226 "trsvcid": "48630" 00:28:12.226 }, 00:28:12.226 "auth": { 00:28:12.226 "state": "completed", 00:28:12.226 "digest": "sha512", 00:28:12.226 "dhgroup": "ffdhe2048" 00:28:12.226 } 00:28:12.226 } 00:28:12.226 ]' 00:28:12.226 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:12.485 23:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:13.052 23:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:14.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.436 23:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:15.006 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:15.577 00:28:15.577 23:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:15.577 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:15.577 23:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:16.147 { 00:28:16.147 "cntlid": 111, 00:28:16.147 "qid": 0, 00:28:16.147 "state": "enabled", 00:28:16.147 "thread": "nvmf_tgt_poll_group_000", 00:28:16.147 "listen_address": { 00:28:16.147 "trtype": "TCP", 00:28:16.147 "adrfam": "IPv4", 00:28:16.147 "traddr": "10.0.0.2", 00:28:16.147 "trsvcid": "4420" 00:28:16.147 }, 00:28:16.147 "peer_address": { 00:28:16.147 "trtype": "TCP", 00:28:16.147 "adrfam": "IPv4", 00:28:16.147 "traddr": "10.0.0.1", 00:28:16.147 "trsvcid": "48674" 00:28:16.147 }, 00:28:16.147 "auth": { 00:28:16.147 "state": "completed", 00:28:16.147 "digest": "sha512", 00:28:16.147 "dhgroup": "ffdhe2048" 00:28:16.147 } 00:28:16.147 } 00:28:16.147 ]' 00:28:16.147 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:16.406 23:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:16.975 23:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:18.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:18.885 23:08:54 
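[editor's note] One detail worth calling out, since it explains why the key3 passes above omit --dhchap-ctrlr-key entirely: the script builds that argument with bash's ${parameter:+word} expansion (the "ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})" lines in the trace), so when no controller key is defined for an index the whole flag disappears instead of being passed empty. A minimal standalone illustration of the idiom, with ckeys[3] deliberately left unset to mirror this run:

    #!/usr/bin/env bash
    # Controller keys exist for indices 0-2 only, as in the run above.
    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)

    for keyid in 0 1 2 3; do
        # Expands to the two-word option only when ckeys[$keyid] is set and non-empty;
        # for keyid=3 the array stays empty and no flag is emitted.
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> --dhchap-key key$keyid ${ckey[@]}"
    done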
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:18.885 23:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.145 23:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.715 00:28:19.975 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:19.975 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:19.975 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:20.235 { 00:28:20.235 "cntlid": 113, 00:28:20.235 "qid": 0, 00:28:20.235 "state": "enabled", 00:28:20.235 "thread": "nvmf_tgt_poll_group_000", 00:28:20.235 "listen_address": { 00:28:20.235 "trtype": "TCP", 00:28:20.235 "adrfam": "IPv4", 00:28:20.235 "traddr": "10.0.0.2", 00:28:20.235 "trsvcid": "4420" 00:28:20.235 }, 00:28:20.235 "peer_address": { 00:28:20.235 "trtype": "TCP", 00:28:20.235 "adrfam": "IPv4", 00:28:20.235 "traddr": "10.0.0.1", 00:28:20.235 "trsvcid": "48704" 00:28:20.235 }, 00:28:20.235 "auth": { 00:28:20.235 "state": "completed", 00:28:20.235 "digest": "sha512", 00:28:20.235 "dhgroup": "ffdhe3072" 00:28:20.235 } 00:28:20.235 } 00:28:20.235 ]' 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:20.235 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:20.495 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:20.495 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:20.495 23:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:21.065 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:22.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:22.444 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:23.029 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:28:23.029 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:23.029 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.030 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.305 00:28:23.305 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:23.305 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:23.305 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:24.246 { 00:28:24.246 "cntlid": 115, 00:28:24.246 "qid": 0, 00:28:24.246 "state": "enabled", 00:28:24.246 "thread": "nvmf_tgt_poll_group_000", 00:28:24.246 "listen_address": { 00:28:24.246 "trtype": "TCP", 00:28:24.246 "adrfam": "IPv4", 00:28:24.246 "traddr": "10.0.0.2", 00:28:24.246 "trsvcid": "4420" 00:28:24.246 }, 00:28:24.246 "peer_address": { 00:28:24.246 "trtype": "TCP", 00:28:24.246 "adrfam": "IPv4", 00:28:24.246 "traddr": "10.0.0.1", 00:28:24.246 "trsvcid": "38664" 00:28:24.246 }, 00:28:24.246 "auth": { 00:28:24.246 "state": "completed", 00:28:24.246 "digest": "sha512", 00:28:24.246 "dhgroup": "ffdhe3072" 00:28:24.246 } 00:28:24.246 } 00:28:24.246 ]' 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:24.246 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:24.817 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:26.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.199 23:09:02 
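[editor's note] The JSON blocks printed between attach and detach are the output of nvmf_subsystem_get_qpairs, and the three jq probes that follow each one are the actual pass/fail checks of this test: the negotiated digest, dhgroup and auth state must match what was just configured. A sketch of that verification step, assuming rpc.py is on PATH (the run invokes it by full path) and a single qpair is present:

    # qpairs JSON as returned by the target-side RPC for this subsystem.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The iteration only passes when all three auth fields match the configured values.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1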
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:26.199 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.459 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.026 00:28:27.026 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:27.026 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:27.026 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.590 23:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:27.590 { 00:28:27.590 "cntlid": 117, 00:28:27.590 "qid": 0, 00:28:27.590 "state": "enabled", 00:28:27.590 "thread": "nvmf_tgt_poll_group_000", 00:28:27.590 "listen_address": { 00:28:27.590 "trtype": "TCP", 00:28:27.590 "adrfam": "IPv4", 00:28:27.590 "traddr": "10.0.0.2", 00:28:27.590 "trsvcid": "4420" 00:28:27.590 }, 00:28:27.590 "peer_address": { 00:28:27.590 "trtype": "TCP", 00:28:27.590 "adrfam": "IPv4", 00:28:27.590 "traddr": "10.0.0.1", 00:28:27.590 "trsvcid": "38698" 00:28:27.590 }, 00:28:27.590 "auth": { 00:28:27.590 "state": "completed", 00:28:27.590 "digest": "sha512", 00:28:27.590 "dhgroup": "ffdhe3072" 00:28:27.590 } 00:28:27.590 } 00:28:27.590 ]' 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:27.590 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:27.848 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:27.848 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:27.848 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:27.848 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:28.414 23:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:30.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:28:30.315 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:30.573 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:31.506 00:28:31.506 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:31.506 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:31.506 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:32.073 { 00:28:32.073 "cntlid": 119, 00:28:32.073 "qid": 0, 00:28:32.073 "state": "enabled", 00:28:32.073 "thread": 
"nvmf_tgt_poll_group_000", 00:28:32.073 "listen_address": { 00:28:32.073 "trtype": "TCP", 00:28:32.073 "adrfam": "IPv4", 00:28:32.073 "traddr": "10.0.0.2", 00:28:32.073 "trsvcid": "4420" 00:28:32.073 }, 00:28:32.073 "peer_address": { 00:28:32.073 "trtype": "TCP", 00:28:32.073 "adrfam": "IPv4", 00:28:32.073 "traddr": "10.0.0.1", 00:28:32.073 "trsvcid": "38718" 00:28:32.073 }, 00:28:32.073 "auth": { 00:28:32.073 "state": "completed", 00:28:32.073 "digest": "sha512", 00:28:32.073 "dhgroup": "ffdhe3072" 00:28:32.073 } 00:28:32.073 } 00:28:32.073 ]' 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:32.073 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:32.331 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:34.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.233 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.234 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.234 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.234 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.800 00:28:34.800 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:34.800 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:34.800 23:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:35.367 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.367 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:35.367 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.367 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.367 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.367 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:35.367 { 00:28:35.367 "cntlid": 121, 00:28:35.367 "qid": 0, 00:28:35.367 "state": "enabled", 00:28:35.367 "thread": "nvmf_tgt_poll_group_000", 00:28:35.367 "listen_address": { 00:28:35.367 "trtype": "TCP", 00:28:35.367 "adrfam": "IPv4", 00:28:35.367 "traddr": "10.0.0.2", 00:28:35.367 "trsvcid": "4420" 00:28:35.367 }, 00:28:35.367 "peer_address": { 00:28:35.367 "trtype": "TCP", 00:28:35.367 "adrfam": 
"IPv4", 00:28:35.367 "traddr": "10.0.0.1", 00:28:35.367 "trsvcid": "35242" 00:28:35.367 }, 00:28:35.367 "auth": { 00:28:35.367 "state": "completed", 00:28:35.368 "digest": "sha512", 00:28:35.368 "dhgroup": "ffdhe4096" 00:28:35.368 } 00:28:35.368 } 00:28:35.368 ]' 00:28:35.368 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:35.368 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:35.368 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:35.626 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:35.626 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:35.626 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:35.626 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:35.626 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:36.192 23:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:28:37.570 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:37.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.828 23:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:38.394 
23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.394 23:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.960 00:28:39.219 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:39.219 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:39.219 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:39.784 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.784 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:39.784 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.784 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.784 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.784 23:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:39.784 { 00:28:39.784 "cntlid": 123, 00:28:39.784 "qid": 0, 00:28:39.784 "state": "enabled", 00:28:39.784 "thread": "nvmf_tgt_poll_group_000", 00:28:39.784 "listen_address": { 00:28:39.784 "trtype": "TCP", 00:28:39.784 "adrfam": "IPv4", 00:28:39.784 "traddr": "10.0.0.2", 00:28:39.784 "trsvcid": "4420" 00:28:39.784 }, 00:28:39.784 "peer_address": { 00:28:39.784 "trtype": "TCP", 00:28:39.784 "adrfam": "IPv4", 00:28:39.784 "traddr": "10.0.0.1", 00:28:39.784 "trsvcid": "35264" 00:28:39.784 }, 00:28:39.784 "auth": { 00:28:39.784 "state": "completed", 00:28:39.784 "digest": "sha512", 00:28:39.784 "dhgroup": "ffdhe4096" 00:28:39.784 } 00:28:39.784 } 00:28:39.784 ]' 00:28:39.784 23:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:39.784 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:39.784 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:39.784 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:40.043 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:40.043 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:40.043 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:40.043 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:40.609 23:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:42.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.004 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.262 23:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.197 00:28:43.197 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:43.197 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:43.197 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:43.763 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.763 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:43.763 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.763 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.763 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.763 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:43.763 { 00:28:43.763 "cntlid": 125, 00:28:43.763 "qid": 0, 00:28:43.763 "state": "enabled", 00:28:43.763 "thread": "nvmf_tgt_poll_group_000", 00:28:43.763 "listen_address": { 00:28:43.763 "trtype": "TCP", 00:28:43.763 "adrfam": "IPv4", 00:28:43.763 "traddr": "10.0.0.2", 00:28:43.763 "trsvcid": "4420" 00:28:43.763 }, 00:28:43.763 "peer_address": { 00:28:43.763 "trtype": "TCP", 00:28:43.763 "adrfam": "IPv4", 00:28:43.763 "traddr": "10.0.0.1", 00:28:43.763 "trsvcid": "42068" 00:28:43.763 }, 00:28:43.764 "auth": { 00:28:43.764 "state": "completed", 00:28:43.764 "digest": "sha512", 00:28:43.764 "dhgroup": "ffdhe4096" 00:28:43.764 } 00:28:43.764 } 00:28:43.764 ]' 00:28:43.764 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:43.764 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:43.764 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:43.764 
23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:43.764 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:44.021 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:44.021 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:44.021 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:44.587 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:45.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:45.961 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:46.220 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:46.785 00:28:46.785 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:46.785 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:46.785 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:47.351 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.351 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:47.351 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.351 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:47.609 { 00:28:47.609 "cntlid": 127, 00:28:47.609 "qid": 0, 00:28:47.609 "state": "enabled", 00:28:47.609 "thread": "nvmf_tgt_poll_group_000", 00:28:47.609 "listen_address": { 00:28:47.609 "trtype": "TCP", 00:28:47.609 "adrfam": "IPv4", 00:28:47.609 "traddr": "10.0.0.2", 00:28:47.609 "trsvcid": "4420" 00:28:47.609 }, 00:28:47.609 "peer_address": { 00:28:47.609 "trtype": "TCP", 00:28:47.609 "adrfam": "IPv4", 00:28:47.609 "traddr": "10.0.0.1", 00:28:47.609 "trsvcid": "42090" 00:28:47.609 }, 00:28:47.609 "auth": { 00:28:47.609 "state": "completed", 00:28:47.609 "digest": "sha512", 00:28:47.609 "dhgroup": "ffdhe4096" 00:28:47.609 } 00:28:47.609 } 00:28:47.609 ]' 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:47.609 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:47.610 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:47.610 23:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:48.543 23:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:50.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.039 23:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.039 23:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.973 00:28:50.973 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:50.973 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:50.973 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:51.231 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.231 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:51.231 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.231 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:51.489 { 00:28:51.489 "cntlid": 129, 00:28:51.489 "qid": 0, 00:28:51.489 "state": "enabled", 00:28:51.489 "thread": "nvmf_tgt_poll_group_000", 00:28:51.489 "listen_address": { 00:28:51.489 "trtype": "TCP", 00:28:51.489 "adrfam": "IPv4", 00:28:51.489 "traddr": "10.0.0.2", 00:28:51.489 "trsvcid": "4420" 00:28:51.489 }, 00:28:51.489 "peer_address": { 00:28:51.489 "trtype": "TCP", 00:28:51.489 "adrfam": "IPv4", 00:28:51.489 "traddr": "10.0.0.1", 00:28:51.489 "trsvcid": "42106" 00:28:51.489 }, 00:28:51.489 "auth": { 00:28:51.489 "state": "completed", 00:28:51.489 "digest": "sha512", 00:28:51.489 "dhgroup": "ffdhe6144" 00:28:51.489 } 00:28:51.489 } 00:28:51.489 ]' 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:51.489 23:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:52.056 
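Each iteration in this stretch of the log is the same cycle: restrict the host to one DH-HMAC-CHAP digest/dhgroup pair, allow the host NQN on the subsystem with a key (plus optional controller key), attach a bdev_nvme controller so the negotiation runs, check the resulting qpair, then detach; the host entry is removed again before the next combination. A condensed sketch of one cycle using the RPCs visible in the trace (the target-side calls are shown against rpc.py's default socket, since this excerpt only shows them via rpc_cmd; error handling is omitted):

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
hostsock=/var/tmp/host.sock
nqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

# Host side: only negotiate sha512 + ffdhe6144.
"$spdk/scripts/rpc.py" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# Target side: allow the host with key0 and controller key ckey0.
"$spdk/scripts/rpc.py" nvmf_subsystem_add_host "$nqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach a controller, forcing the DH-HMAC-CHAP exchange.
"$spdk/scripts/rpc.py" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$nqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# ... verify with nvmf_subsystem_get_qpairs as shown earlier, then tear down.
"$spdk/scripts/rpc.py" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$spdk/scripts/rpc.py" nvmf_subsystem_remove_host "$nqn" "$hostnqn"

Running the host and target as separate SPDK applications on distinct RPC sockets is what lets a single script drive both ends of the authentication exchange.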
23:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:53.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:53.958 23:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.958 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.959 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.959 23:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.894 00:28:54.894 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:54.894 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:54.894 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:55.830 { 00:28:55.830 "cntlid": 131, 00:28:55.830 "qid": 0, 00:28:55.830 "state": "enabled", 00:28:55.830 "thread": "nvmf_tgt_poll_group_000", 00:28:55.830 "listen_address": { 00:28:55.830 "trtype": "TCP", 00:28:55.830 "adrfam": "IPv4", 00:28:55.830 "traddr": "10.0.0.2", 00:28:55.830 "trsvcid": "4420" 00:28:55.830 }, 00:28:55.830 "peer_address": { 00:28:55.830 "trtype": "TCP", 00:28:55.830 "adrfam": "IPv4", 00:28:55.830 "traddr": "10.0.0.1", 00:28:55.830 "trsvcid": "57824" 00:28:55.830 }, 00:28:55.830 "auth": { 00:28:55.830 "state": "completed", 00:28:55.830 "digest": "sha512", 00:28:55.830 "dhgroup": "ffdhe6144" 00:28:55.830 } 00:28:55.830 } 00:28:55.830 ]' 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:55.830 23:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:55.830 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:55.830 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:55.830 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:56.397 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:58.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:58.299 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:58.558 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:28:58.558 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:58.558 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:58.558 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:58.558 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:58.558 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:58.559 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.559 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.559 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.559 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.559 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.559 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:59.495 
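After the SPDK-host controller is detached, the script also pushes the same credentials through the kernel initiator: nvme connect is invoked with the host and controller secrets in DHHC-1 form, and the connection is torn down by NQN before the host is removed from the subsystem. A hedged sketch of that step (the secret strings below are placeholders, not valid keys; in the log they are the test's pre-generated secrets):

#!/usr/bin/env bash
hostid=cd6acfbe-4794-e311-a299-001e67a97b02
# DHHC-1:<id>:<base64 secret>: -- substitute real secrets before running.
key='DHHC-1:01:AAAA...:'
ctrl_key='DHHC-1:02:BBBB...:'
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0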
00:28:59.495 23:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:59.495 23:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:59.495 23:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:00.093 { 00:29:00.093 "cntlid": 133, 00:29:00.093 "qid": 0, 00:29:00.093 "state": "enabled", 00:29:00.093 "thread": "nvmf_tgt_poll_group_000", 00:29:00.093 "listen_address": { 00:29:00.093 "trtype": "TCP", 00:29:00.093 "adrfam": "IPv4", 00:29:00.093 "traddr": "10.0.0.2", 00:29:00.093 "trsvcid": "4420" 00:29:00.093 }, 00:29:00.093 "peer_address": { 00:29:00.093 "trtype": "TCP", 00:29:00.093 "adrfam": "IPv4", 00:29:00.093 "traddr": "10.0.0.1", 00:29:00.093 "trsvcid": "57836" 00:29:00.093 }, 00:29:00.093 "auth": { 00:29:00.093 "state": "completed", 00:29:00.093 "digest": "sha512", 00:29:00.093 "dhgroup": "ffdhe6144" 00:29:00.093 } 00:29:00.093 } 00:29:00.093 ]' 00:29:00.093 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:00.352 23:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:01.287 23:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:29:02.660 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:02.661 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:02.661 23:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:03.227 23:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:03.794 00:29:04.053 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:04.053 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:04.053 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:04.620 { 00:29:04.620 "cntlid": 135, 00:29:04.620 "qid": 0, 00:29:04.620 "state": "enabled", 00:29:04.620 "thread": "nvmf_tgt_poll_group_000", 00:29:04.620 "listen_address": { 00:29:04.620 "trtype": "TCP", 00:29:04.620 "adrfam": "IPv4", 00:29:04.620 "traddr": "10.0.0.2", 00:29:04.620 "trsvcid": "4420" 00:29:04.620 }, 00:29:04.620 "peer_address": { 00:29:04.620 "trtype": "TCP", 00:29:04.620 "adrfam": "IPv4", 00:29:04.620 "traddr": "10.0.0.1", 00:29:04.620 "trsvcid": "44056" 00:29:04.620 }, 00:29:04.620 "auth": { 00:29:04.620 "state": "completed", 00:29:04.620 "digest": "sha512", 00:29:04.620 "dhgroup": "ffdhe6144" 00:29:04.620 } 00:29:04.620 } 00:29:04.620 ]' 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:04.620 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:04.879 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:04.879 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:04.879 23:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:05.446 23:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:06.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:06.822 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.400 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.401 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.401 23:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:08.776 00:29:09.034 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:09.034 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:09.034 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:09.600 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.600 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
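Every hostrpc <cmd> in the trace is immediately re-printed as its expansion at target/auth.sh@31, which reveals what the helper does: it forwards its arguments to rpc.py pointed at the host application's RPC socket, while rpc_cmd talks to the target application. A one-function sketch of the wrapper as implied by the trace (the variable names are illustrative):

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # path as printed in the log
hostsock=/var/tmp/host.sock                                # host application's RPC socket

# hostrpc: run an SPDK RPC against the host (initiator) application.
hostrpc() {
    "$rootdir/scripts/rpc.py" -s "$hostsock" "$@"
}

# Example taken from the trace: list attached controllers and pick their names.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'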
00:29:09.600 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.600 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.600 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.600 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:09.600 { 00:29:09.600 "cntlid": 137, 00:29:09.600 "qid": 0, 00:29:09.601 "state": "enabled", 00:29:09.601 "thread": "nvmf_tgt_poll_group_000", 00:29:09.601 "listen_address": { 00:29:09.601 "trtype": "TCP", 00:29:09.601 "adrfam": "IPv4", 00:29:09.601 "traddr": "10.0.0.2", 00:29:09.601 "trsvcid": "4420" 00:29:09.601 }, 00:29:09.601 "peer_address": { 00:29:09.601 "trtype": "TCP", 00:29:09.601 "adrfam": "IPv4", 00:29:09.601 "traddr": "10.0.0.1", 00:29:09.601 "trsvcid": "44080" 00:29:09.601 }, 00:29:09.601 "auth": { 00:29:09.601 "state": "completed", 00:29:09.601 "digest": "sha512", 00:29:09.601 "dhgroup": "ffdhe8192" 00:29:09.601 } 00:29:09.601 } 00:29:09.601 ]' 00:29:09.601 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:09.601 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:09.601 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:09.601 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:09.601 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:09.859 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:09.859 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:09.859 23:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:10.118 23:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:12.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:12.018 23:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.277 23:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.651 00:29:13.651 23:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:13.652 23:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:13.652 23:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:13.910 { 00:29:13.910 "cntlid": 139, 00:29:13.910 "qid": 0, 00:29:13.910 "state": "enabled", 00:29:13.910 "thread": "nvmf_tgt_poll_group_000", 00:29:13.910 "listen_address": { 00:29:13.910 "trtype": "TCP", 00:29:13.910 "adrfam": "IPv4", 00:29:13.910 "traddr": "10.0.0.2", 00:29:13.910 "trsvcid": "4420" 00:29:13.910 }, 00:29:13.910 "peer_address": { 00:29:13.910 "trtype": "TCP", 00:29:13.910 "adrfam": "IPv4", 00:29:13.910 "traddr": "10.0.0.1", 00:29:13.910 "trsvcid": "50154" 00:29:13.910 }, 00:29:13.910 "auth": { 00:29:13.910 "state": "completed", 00:29:13.910 "digest": "sha512", 00:29:13.910 "dhgroup": "ffdhe8192" 00:29:13.910 } 00:29:13.910 } 00:29:13.910 ]' 00:29:13.910 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:14.168 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:14.734 23:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:OWRiZjlhNGU5NjkzZTU4ZmUwMWM5Mjc2NDg0NmQwNmUAXz8k: --dhchap-ctrl-secret DHHC-1:02:NGQ0MzFiZGY4ZmQ3OWY3OWRkM2U0NjI0MWNiMGNjZTZiMGQ2NWRhNDRhYjE0MzRmYZh7aw==: 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:16.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:16.635 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.007 00:29:18.007 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:18.007 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:18.007 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:18.273 { 00:29:18.273 "cntlid": 141, 00:29:18.273 "qid": 0, 00:29:18.273 "state": "enabled", 00:29:18.273 "thread": "nvmf_tgt_poll_group_000", 00:29:18.273 "listen_address": 
{ 00:29:18.273 "trtype": "TCP", 00:29:18.273 "adrfam": "IPv4", 00:29:18.273 "traddr": "10.0.0.2", 00:29:18.273 "trsvcid": "4420" 00:29:18.273 }, 00:29:18.273 "peer_address": { 00:29:18.273 "trtype": "TCP", 00:29:18.273 "adrfam": "IPv4", 00:29:18.273 "traddr": "10.0.0.1", 00:29:18.273 "trsvcid": "50182" 00:29:18.273 }, 00:29:18.273 "auth": { 00:29:18.273 "state": "completed", 00:29:18.273 "digest": "sha512", 00:29:18.273 "dhgroup": "ffdhe8192" 00:29:18.273 } 00:29:18.273 } 00:29:18.273 ]' 00:29:18.273 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:18.539 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:19.474 23:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZTU0N2JmOTUxY2U0YTViNTZiZDhjZDgzMjAzNmQ1NDBiYmNhNTg0YjkxMmEyZjBhPuFgZg==: --dhchap-ctrl-secret DHHC-1:01:OTk5YjkzY2Q4ZWY0MDBmOTdjZTI2NGFkZGNhNjVjYzZzg+pa: 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:20.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:20.848 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.107 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.365 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.365 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:21.365 23:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:22.738 00:29:22.738 23:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:22.738 23:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:22.738 23:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:23.304 { 00:29:23.304 "cntlid": 143, 00:29:23.304 "qid": 0, 00:29:23.304 "state": "enabled", 00:29:23.304 "thread": "nvmf_tgt_poll_group_000", 00:29:23.304 "listen_address": { 00:29:23.304 "trtype": "TCP", 00:29:23.304 "adrfam": "IPv4", 00:29:23.304 "traddr": "10.0.0.2", 00:29:23.304 "trsvcid": "4420" 00:29:23.304 }, 00:29:23.304 "peer_address": { 00:29:23.304 "trtype": "TCP", 00:29:23.304 "adrfam": "IPv4", 00:29:23.304 "traddr": "10.0.0.1", 00:29:23.304 "trsvcid": "41132" 00:29:23.304 }, 00:29:23.304 "auth": { 00:29:23.304 "state": "completed", 00:29:23.304 "digest": "sha512", 00:29:23.304 "dhgroup": 
"ffdhe8192" 00:29:23.304 } 00:29:23.304 } 00:29:23.304 ]' 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:23.304 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:23.561 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:23.561 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:23.561 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:24.128 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:25.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:25.501 23:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.759 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.131 00:29:27.390 23:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:27.390 23:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:27.390 23:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:27.956 { 00:29:27.956 "cntlid": 145, 00:29:27.956 "qid": 0, 00:29:27.956 "state": "enabled", 00:29:27.956 "thread": "nvmf_tgt_poll_group_000", 00:29:27.956 "listen_address": { 00:29:27.956 "trtype": "TCP", 00:29:27.956 "adrfam": "IPv4", 00:29:27.956 "traddr": "10.0.0.2", 00:29:27.956 "trsvcid": "4420" 00:29:27.956 }, 00:29:27.956 "peer_address": { 00:29:27.956 "trtype": "TCP", 00:29:27.956 "adrfam": "IPv4", 00:29:27.956 "traddr": "10.0.0.1", 00:29:27.956 "trsvcid": "41160" 00:29:27.956 }, 00:29:27.956 "auth": { 00:29:27.956 
"state": "completed", 00:29:27.956 "digest": "sha512", 00:29:27.956 "dhgroup": "ffdhe8192" 00:29:27.956 } 00:29:27.956 } 00:29:27.956 ]' 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:27.956 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:28.890 23:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MzYxMzkyODExYTViZmMxZDgzYjY5MTkxNTQ0ZDAxMGM2ODI4NjZhMjYxNWZkZjdjz0d4Gg==: --dhchap-ctrl-secret DHHC-1:03:M2YwNmViY2UwZWJmYzMyYjZmODA3OTg2MmRhYTM0NDFiYTFjZjkzN2Y5MTE4ZmFhM2M5MmQ1MTFiZmZiNTc3NYNfCVU=: 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:30.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:30.265 23:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:30.265 23:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:31.199 request: 00:29:31.199 { 00:29:31.199 "name": "nvme0", 00:29:31.199 "trtype": "tcp", 00:29:31.199 "traddr": "10.0.0.2", 00:29:31.199 "adrfam": "ipv4", 00:29:31.199 "trsvcid": "4420", 00:29:31.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:31.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:29:31.199 "prchk_reftag": false, 00:29:31.199 "prchk_guard": false, 00:29:31.199 "hdgst": false, 00:29:31.199 "ddgst": false, 00:29:31.199 "dhchap_key": "key2", 00:29:31.199 "method": "bdev_nvme_attach_controller", 00:29:31.199 "req_id": 1 00:29:31.199 } 00:29:31.199 Got JSON-RPC error response 00:29:31.199 response: 00:29:31.199 { 00:29:31.199 "code": -5, 00:29:31.199 "message": "Input/output error" 00:29:31.199 } 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.458 
23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:31.458 23:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:32.832 request: 00:29:32.832 { 00:29:32.832 "name": "nvme0", 00:29:32.832 "trtype": "tcp", 00:29:32.832 "traddr": "10.0.0.2", 00:29:32.832 "adrfam": "ipv4", 00:29:32.832 "trsvcid": "4420", 00:29:32.832 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:32.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:29:32.832 "prchk_reftag": false, 00:29:32.832 "prchk_guard": false, 00:29:32.832 "hdgst": false, 00:29:32.832 "ddgst": false, 00:29:32.832 "dhchap_key": "key1", 00:29:32.832 "dhchap_ctrlr_key": "ckey2", 00:29:32.832 "method": "bdev_nvme_attach_controller", 00:29:32.832 "req_id": 1 00:29:32.832 } 00:29:32.832 Got JSON-RPC error response 00:29:32.832 response: 00:29:32.832 { 00:29:32.832 "code": -5, 00:29:32.832 "message": "Input/output error" 00:29:32.832 } 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:32.832 23:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.832 23:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.767 request: 00:29:33.767 { 00:29:33.767 "name": "nvme0", 00:29:33.767 "trtype": "tcp", 00:29:33.767 "traddr": "10.0.0.2", 00:29:33.767 "adrfam": "ipv4", 00:29:33.767 "trsvcid": "4420", 00:29:33.767 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:33.767 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:29:33.767 "prchk_reftag": false, 00:29:33.767 "prchk_guard": false, 00:29:33.767 "hdgst": false, 00:29:33.767 "ddgst": false, 00:29:33.767 "dhchap_key": "key1", 00:29:33.767 "dhchap_ctrlr_key": "ckey1", 00:29:33.767 "method": "bdev_nvme_attach_controller", 00:29:33.767 "req_id": 1 00:29:33.767 } 00:29:33.767 Got JSON-RPC error response 00:29:33.767 response: 00:29:33.767 { 00:29:33.767 "code": -5, 00:29:33.767 "message": "Input/output error" 00:29:33.767 } 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 897108 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 897108 ']' 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 897108 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 897108 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 897108' 00:29:33.767 killing process with pid 897108 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 897108 00:29:33.767 23:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 897108 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=931527 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 931527 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 931527 ']' 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.026 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 931527 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 931527 ']' 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
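The records above restart the NVMe-oF target with authentication debug tracing (-L nvmf_auth) and --wait-for-rpc, then wait for its RPC socket before the test continues. A minimal sketch of that restart phase, assuming the same workspace path, network namespace and sockets used in this run (the readiness poll stands in for the waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Launch the target inside the test netns with nvmf_auth tracing enabled;
  # --wait-for-rpc defers subsystem setup until framework_start_init is issued.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Poll the RPC socket until the application answers, then finish initialization.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init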
00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.593 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.159 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.159 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:29:35.159 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:29:35.160 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.160 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:35.418 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:36.803 00:29:36.803 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:36.803 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:36.803 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:37.377 { 00:29:37.377 "cntlid": 1, 00:29:37.377 "qid": 0, 00:29:37.377 "state": "enabled", 00:29:37.377 "thread": "nvmf_tgt_poll_group_000", 00:29:37.377 "listen_address": { 00:29:37.377 "trtype": "TCP", 00:29:37.377 "adrfam": "IPv4", 00:29:37.377 "traddr": "10.0.0.2", 00:29:37.377 "trsvcid": "4420" 00:29:37.377 }, 00:29:37.377 "peer_address": { 00:29:37.377 "trtype": "TCP", 00:29:37.377 "adrfam": "IPv4", 00:29:37.377 "traddr": "10.0.0.1", 00:29:37.377 "trsvcid": "35266" 00:29:37.377 }, 00:29:37.377 "auth": { 00:29:37.377 "state": "completed", 00:29:37.377 "digest": "sha512", 00:29:37.377 "dhgroup": "ffdhe8192" 00:29:37.377 } 00:29:37.377 } 00:29:37.377 ]' 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:37.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:37.635 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:37.635 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:37.635 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.635 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.635 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:38.201 23:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:YTY3ZTM4MDEyYjIwZWNhYzY4OTEyYjQyMGQ5NTI5NTI0YzliZTA3MGFmYzZhMDRmOGIwMjcyYjNjNjg2YzFiZXR3Dlw=: 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:40.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:29:40.100 23:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:40.358 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:40.924 request: 00:29:40.924 { 00:29:40.924 "name": "nvme0", 00:29:40.924 "trtype": "tcp", 00:29:40.924 "traddr": "10.0.0.2", 00:29:40.924 "adrfam": "ipv4", 00:29:40.924 "trsvcid": "4420", 00:29:40.924 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:40.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:29:40.924 "prchk_reftag": false, 00:29:40.924 "prchk_guard": false, 00:29:40.924 "hdgst": false, 00:29:40.924 "ddgst": false, 00:29:40.924 "dhchap_key": "key3", 00:29:40.924 "method": "bdev_nvme_attach_controller", 00:29:40.924 "req_id": 1 00:29:40.924 } 00:29:40.924 Got JSON-RPC error response 00:29:40.924 response: 00:29:40.924 { 00:29:40.924 "code": -5, 00:29:40.924 "message": "Input/output error" 00:29:40.924 } 00:29:40.924 23:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:29:40.924 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:29:41.489 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:41.489 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:41.489 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:41.489 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:41.490 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.490 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:41.490 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.490 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:41.490 23:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:42.423 request: 00:29:42.423 { 00:29:42.423 "name": "nvme0", 00:29:42.423 "trtype": "tcp", 00:29:42.423 "traddr": "10.0.0.2", 00:29:42.423 "adrfam": "ipv4", 00:29:42.423 "trsvcid": "4420", 00:29:42.423 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:42.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:29:42.423 "prchk_reftag": false, 00:29:42.423 "prchk_guard": false, 00:29:42.423 "hdgst": false, 00:29:42.423 "ddgst": false, 00:29:42.423 "dhchap_key": "key3", 00:29:42.423 
"method": "bdev_nvme_attach_controller", 00:29:42.423 "req_id": 1 00:29:42.423 } 00:29:42.423 Got JSON-RPC error response 00:29:42.423 response: 00:29:42.423 { 00:29:42.423 "code": -5, 00:29:42.423 "message": "Input/output error" 00:29:42.423 } 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.423 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.682 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:42.682 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.682 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.939 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.939 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:42.939 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.939 23:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:42.940 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:43.506 request: 00:29:43.506 { 00:29:43.506 "name": "nvme0", 00:29:43.506 "trtype": "tcp", 00:29:43.506 "traddr": "10.0.0.2", 00:29:43.506 "adrfam": "ipv4", 00:29:43.506 "trsvcid": "4420", 00:29:43.506 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:43.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:29:43.506 "prchk_reftag": false, 00:29:43.506 "prchk_guard": false, 00:29:43.506 "hdgst": false, 00:29:43.506 "ddgst": false, 00:29:43.506 "dhchap_key": "key0", 00:29:43.506 "dhchap_ctrlr_key": "key1", 00:29:43.506 "method": "bdev_nvme_attach_controller", 00:29:43.506 "req_id": 1 00:29:43.506 } 00:29:43.506 Got JSON-RPC error response 00:29:43.506 response: 00:29:43.506 { 00:29:43.506 "code": -5, 00:29:43.506 "message": "Input/output error" 00:29:43.506 } 00:29:43.506 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:43.506 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:43.506 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:43.506 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:43.506 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:43.506 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:43.764 00:29:43.764 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:29:43.764 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
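Around target/auth.sh@188 the run exercises a negative case: attaching with a mismatched --dhchap-ctrlr-key must fail, and the JSON-RPC dump above shows the expected code -5 (Input/output error), while the follow-up attach using only key0 succeeds and the controller name is checked. A condensed sketch of that pattern against the same host RPC socket, using the host NQN from this run:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  # A wrong controller key must not authenticate: bdev_nvme_attach_controller is
  # expected to return code -5 (Input/output error), as in the response above.
  if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1; then
      echo "unexpected: authentication should have failed" >&2
  fi
  # The attach with the valid key0 alone succeeds; the controller is listed as nvme0.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'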
00:29:43.764 23:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:44.330 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.330 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:44.330 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 897139 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 897139 ']' 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 897139 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 897139 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 897139' 00:29:44.897 killing process with pid 897139 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 897139 00:29:44.897 23:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 897139 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.467 rmmod nvme_tcp 00:29:45.467 rmmod nvme_fabrics 00:29:45.467 rmmod nvme_keyring 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
931527 ']' 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 931527 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 931527 ']' 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 931527 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 931527 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 931527' 00:29:45.467 killing process with pid 931527 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 931527 00:29:45.467 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 931527 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.727 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.268 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.268 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.guh /tmp/spdk.key-sha256.del /tmp/spdk.key-sha384.ExD /tmp/spdk.key-sha512.GFO /tmp/spdk.key-sha512.ljk /tmp/spdk.key-sha384.jsj /tmp/spdk.key-sha256.PuN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:29:48.268 00:29:48.268 real 5m12.021s 00:29:48.268 user 12m31.395s 00:29:48.268 sys 0m41.677s 00:29:48.268 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:48.268 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.268 ************************************ 00:29:48.268 END TEST nvmf_auth_target 00:29:48.268 ************************************ 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:48.268 ************************************ 00:29:48.268 START TEST nvmf_bdevio_no_huge 00:29:48.268 ************************************ 00:29:48.268 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:29:48.269 * Looking for test storage... 00:29:48.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
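The bdevio run that starts above first sources test/nvmf/common.sh, which supplies the defaults visible in this trace (NVMF_PORT=4420, NVMF_SERIAL=SPDKISFASTANDAWESOME, NET_TYPE=phy, a host NQN generated with nvme gen-hostnqn, and so on). To rerun test/nvmf/target/bdevio.sh outside the harness, roughly the same environment has to exist first; a minimal sketch, assuming nvme-cli is installed and the commands run from an SPDK checkout (the hostid derivation is an assumption mirroring the uuid seen in this run, not a line from the script):

  export NVMF_PORT=4420
  export NVMF_SERIAL=SPDKISFASTANDAWESOME
  export NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  export NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  export NVME_HOSTID=${NVME_HOSTNQN##*:}      # uuid portion, used for --hostid (assumed derivation)
  ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages

In practice simply sourcing test/nvmf/common.sh, as the harness does, is the safer route; the exports above only mirror the values this particular run printed.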
00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:48.269 23:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.269 23:10:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.609 23:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:51.609 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.609 23:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:51.609 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:51.609 Found net devices under 0000:84:00.0: cvl_0_0 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.609 
23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:51.609 Found net devices under 0000:84:00.1: cvl_0_1 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.609 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.609 
23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:29:51.609 00:29:51.609 --- 10.0.0.2 ping statistics --- 00:29:51.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.610 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:29:51.610 00:29:51.610 --- 10.0.0.1 ping statistics --- 00:29:51.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.610 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=934851 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 934851 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 934851 ']' 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
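Before the target application is launched, nvmf_tcp_init has carved the two ice ports into a loopback topology: the first port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the second (cvl_0_1) stays in the root namespace as 10.0.0.1, and the pings above confirm both directions. Reproduced by hand, the sequence from this trace is roughly the following (interface names are specific to this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... --no-huge -s 1024 -m 0x78), which is why the waitforlisten step below polls /var/tmp/spdk.sock.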
00:29:51.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.610 23:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:51.610 [2024-07-22 23:10:27.657850] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:29:51.610 [2024-07-22 23:10:27.658037] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:29:51.610 [2024-07-22 23:10:27.828480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.870 [2024-07-22 23:10:28.024537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.870 [2024-07-22 23:10:28.024639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.870 [2024-07-22 23:10:28.024675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.870 [2024-07-22 23:10:28.024719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.870 [2024-07-22 23:10:28.024740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.870 [2024-07-22 23:10:28.024875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.870 [2024-07-22 23:10:28.024954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:51.870 [2024-07-22 23:10:28.025024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:51.870 [2024-07-22 23:10:28.025028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:52.130 [2024-07-22 23:10:28.250505] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.130 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:52.131 Malloc0 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:52.131 [2024-07-22 23:10:28.324170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.131 { 00:29:52.131 "params": { 00:29:52.131 "name": "Nvme$subsystem", 00:29:52.131 "trtype": "$TEST_TRANSPORT", 00:29:52.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.131 "adrfam": "ipv4", 00:29:52.131 "trsvcid": "$NVMF_PORT", 00:29:52.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.131 "hdgst": ${hdgst:-false}, 00:29:52.131 "ddgst": ${ddgst:-false} 00:29:52.131 }, 00:29:52.131 "method": "bdev_nvme_attach_controller" 00:29:52.131 } 00:29:52.131 EOF 00:29:52.131 )") 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:29:52.131 23:10:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.131 "params": { 00:29:52.131 "name": "Nvme1", 00:29:52.131 "trtype": "tcp", 00:29:52.131 "traddr": "10.0.0.2", 00:29:52.131 "adrfam": "ipv4", 00:29:52.131 "trsvcid": "4420", 00:29:52.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.131 "hdgst": false, 00:29:52.131 "ddgst": false 00:29:52.131 }, 00:29:52.131 "method": "bdev_nvme_attach_controller" 00:29:52.131 }' 00:29:52.131 [2024-07-22 23:10:28.377085] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:29:52.131 [2024-07-22 23:10:28.377186] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid934942 ] 00:29:52.390 [2024-07-22 23:10:28.467523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:52.390 [2024-07-22 23:10:28.637883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.390 [2024-07-22 23:10:28.637943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.390 [2024-07-22 23:10:28.637947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.957 I/O targets: 00:29:52.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:52.957 00:29:52.957 00:29:52.957 CUnit - A unit testing framework for C - Version 2.1-3 00:29:52.957 http://cunit.sourceforge.net/ 00:29:52.957 00:29:52.957 00:29:52.957 Suite: bdevio tests on: Nvme1n1 00:29:52.957 Test: blockdev write read block ...passed 00:29:52.957 Test: blockdev write zeroes read block ...passed 00:29:52.957 Test: blockdev write zeroes read no split ...passed 00:29:52.957 Test: blockdev write zeroes read split ...passed 00:29:52.957 Test: blockdev write zeroes read split partial ...passed 00:29:52.957 Test: blockdev reset ...[2024-07-22 23:10:29.116582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.957 [2024-07-22 23:10:29.116817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113d900 (9): Bad file descriptor 00:29:52.957 [2024-07-22 23:10:29.211205] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
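The --json /dev/fd/62 argument above points bdevio at the output of gen_nvmf_target_json, whose rendered attach parameters are printed in the trace (Nvme1, tcp, 10.0.0.2:4420, cnode1/host1, digests off). Saved to a plain file instead of a process-substitution fd, an equivalent config would look roughly like the sketch below; note the outer subsystems/bdev/config nesting is the usual SPDK JSON-config layout and is not printed verbatim in this log:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

with bdevio then invoked as

  ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024

against which the same 23 block-device tests listed below run on the attached Nvme1n1 namespace.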
00:29:52.957 passed 00:29:52.957 Test: blockdev write read 8 blocks ...passed 00:29:52.957 Test: blockdev write read size > 128k ...passed 00:29:52.957 Test: blockdev write read invalid size ...passed 00:29:52.957 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:52.957 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:52.957 Test: blockdev write read max offset ...passed 00:29:53.215 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:53.215 Test: blockdev writev readv 8 blocks ...passed 00:29:53.215 Test: blockdev writev readv 30 x 1block ...passed 00:29:53.215 Test: blockdev writev readv block ...passed 00:29:53.215 Test: blockdev writev readv size > 128k ...passed 00:29:53.215 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:53.215 Test: blockdev comparev and writev ...[2024-07-22 23:10:29.431177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.215 [2024-07-22 23:10:29.431262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.215 [2024-07-22 23:10:29.431357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.215 [2024-07-22 23:10:29.431410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:53.215 [2024-07-22 23:10:29.432011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.215 [2024-07-22 23:10:29.432080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:53.215 [2024-07-22 23:10:29.432123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.215 [2024-07-22 23:10:29.432154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:53.215 [2024-07-22 23:10:29.432766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.216 [2024-07-22 23:10:29.432827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:53.216 [2024-07-22 23:10:29.432900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.216 [2024-07-22 23:10:29.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:53.216 [2024-07-22 23:10:29.433585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.216 [2024-07-22 23:10:29.433651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:53.216 [2024-07-22 23:10:29.433724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:53.216 [2024-07-22 23:10:29.433755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:53.216 passed 00:29:53.216 Test: blockdev nvme passthru rw ...passed 00:29:53.216 Test: blockdev nvme passthru vendor specific ...[2024-07-22 23:10:29.515673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.216 [2024-07-22 23:10:29.515711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:53.216 [2024-07-22 23:10:29.516038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.216 [2024-07-22 23:10:29.516082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:53.216 [2024-07-22 23:10:29.516393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.216 [2024-07-22 23:10:29.516426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:53.216 [2024-07-22 23:10:29.516711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.216 [2024-07-22 23:10:29.516755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:53.216 passed 00:29:53.474 Test: blockdev nvme admin passthru ...passed 00:29:53.474 Test: blockdev copy ...passed 00:29:53.474 00:29:53.474 Run Summary: Type Total Ran Passed Failed Inactive 00:29:53.474 suites 1 1 n/a 0 0 00:29:53.474 tests 23 23 23 0 0 00:29:53.474 asserts 152 152 152 0 n/a 00:29:53.474 00:29:53.474 Elapsed time = 1.190 seconds 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.043 rmmod nvme_tcp 00:29:54.043 rmmod nvme_fabrics 00:29:54.043 rmmod nvme_keyring 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 934851 ']' 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 934851 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 934851 ']' 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 934851 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 934851 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 934851' 00:29:54.043 killing process with pid 934851 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 934851 00:29:54.043 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 934851 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.981 23:10:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.907 23:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.907 00:29:56.907 real 0m8.927s 00:29:56.907 user 0m14.652s 00:29:56.907 sys 0m4.327s 00:29:56.907 23:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:56.907 23:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:56.907 ************************************ 00:29:56.907 END TEST nvmf_bdevio_no_huge 00:29:56.907 ************************************ 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh 
--transport=tcp 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:56.907 ************************************ 00:29:56.907 START TEST nvmf_tls 00:29:56.907 ************************************ 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:29:56.907 * Looking for test storage... 00:29:56.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.907 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
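The rpc_py assignment that follows points the TLS suite at scripts/rpc.py. For reference, the target bring-up captured earlier in this log (the rpc_cmd calls in the bdevio suite) maps onto roughly this rpc.py sequence against the target's default RPC socket; the method names and flags are copied from the trace, while the wrapper variable is only illustrative:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420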
00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.908 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.206 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.206 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:30:00.206 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:00.206 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:00.207 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:00.207 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:00.207 Found net devices under 0000:84:00.0: cvl_0_0 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:00.207 Found net devices under 0000:84:00.1: cvl_0_1 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.207 23:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.207 23:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:00.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:30:00.207 00:30:00.207 --- 10.0.0.2 ping statistics --- 00:30:00.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.207 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:00.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:30:00.207 00:30:00.207 --- 10.0.0.1 ping statistics --- 00:30:00.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.207 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.207 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=937224 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 937224 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937224 ']' 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:00.208 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.208 [2024-07-22 23:10:36.230059] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:00.208 [2024-07-22 23:10:36.230158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.208 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.208 [2024-07-22 23:10:36.325956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.208 [2024-07-22 23:10:36.435452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.208 [2024-07-22 23:10:36.435515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.208 [2024-07-22 23:10:36.435536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.208 [2024-07-22 23:10:36.435553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.208 [2024-07-22 23:10:36.435567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.208 [2024-07-22 23:10:36.435602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:30:00.467 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:30:01.034 true 00:30:01.034 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:01.034 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:30:01.600 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:30:01.600 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:30:01.600 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:30:02.168 23:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:30:02.168 23:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:02.734 23:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:30:02.734 23:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:30:02.735 23:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:30:02.993 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:02.993 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:30:03.560 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:30:03.560 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:30:03.560 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:03.560 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:30:03.819 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:30:03.819 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:30:03.819 23:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:30:04.077 23:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:04.077 23:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:30:04.645 23:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:30:04.645 23:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:30:04.645 23:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:30:04.903 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:04.904 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:30:05.471 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.hpaENrm6Ai 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.D3euJPNQAu 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hpaENrm6Ai 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.D3euJPNQAu 00:30:05.730 23:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:30:06.296 23:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:30:06.864 23:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hpaENrm6Ai 00:30:06.864 23:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hpaENrm6Ai 00:30:06.864 23:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:07.806 [2024-07-22 23:10:43.753685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.806 23:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:08.372 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:08.939 [2024-07-22 23:10:45.005154] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:08.939 [2024-07-22 23:10:45.005432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.939 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:09.506 malloc0 00:30:09.506 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:10.073 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hpaENrm6Ai 00:30:10.641 [2024-07-22 23:10:46.808584] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:10.641 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hpaENrm6Ai 00:30:10.641 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.884 Initializing NVMe Controllers 00:30:22.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:22.884 Initialization complete. Launching workers. 00:30:22.884 ======================================================== 00:30:22.884 Latency(us) 00:30:22.884 Device Information : IOPS MiB/s Average min max 00:30:22.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5994.07 23.41 10681.78 1432.44 13026.78 00:30:22.884 ======================================================== 00:30:22.884 Total : 5994.07 23.41 10681.78 1432.44 13026.78 00:30:22.884 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpaENrm6Ai 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hpaENrm6Ai' 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=939506 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 939506 /var/tmp/bdevperf.sock 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 939506 ']' 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:22.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:22.884 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:22.884 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:22.884 [2024-07-22 23:10:57.066285] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:22.884 [2024-07-22 23:10:57.066450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939506 ] 00:30:22.884 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.884 [2024-07-22 23:10:57.163112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.884 [2024-07-22 23:10:57.273886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.884 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:22.884 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:22.884 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hpaENrm6Ai 00:30:22.884 [2024-07-22 23:10:57.922620] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:22.884 [2024-07-22 23:10:57.922778] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:22.884 TLSTESTn1 00:30:22.884 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:30:22.884 Running I/O for 10 seconds... 
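The run above prepares two PSKs in the NVMe/TCP TLS interchange format (the NVMeTLSkey-1:01:...: strings: a version prefix, a hash identifier, and a base64 payload derived from the raw hex key; per the interchange format that payload should be the key bytes plus a CRC-32, though that detail is not visible in the log), writes them to 0600 temp files, selects the ssl sock implementation with TLS 1.3, and builds a TLS-enabled target before attaching from bdevperf. Condensed into one place, using only commands and paths that appear in this run (note the log itself flags the --psk path form as deprecated, to be removed in v24.09), the positive-path sequence is roughly:

  # Sketch assembled from the commands logged above; rpc.py targets the
  # nvmf_tgt running inside the cvl_0_0_ns_spdk namespace on 10.0.0.2.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  key_path=/tmp/tmp.hpaENrm6Ai

  # PSK in interchange format, stored with restrictive permissions
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"

  # Target side: ssl sock impl pinned to TLS 1.3, tcp transport, one subsystem
  # with a malloc namespace, a TLS listener (-k) on 4420, and host1 authorized
  # with the key above
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

  # Initiator side: bdevperf attaches over TLS with the same key
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$key_path"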
00:30:32.866 00:30:32.866 Latency(us) 00:30:32.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.866 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:32.866 Verification LBA range: start 0x0 length 0x2000 00:30:32.866 TLSTESTn1 : 10.03 2569.40 10.04 0.00 0.00 49718.41 8349.77 48351.00 00:30:32.866 =================================================================================================================== 00:30:32.866 Total : 2569.40 10.04 0.00 0.00 49718.41 8349.77 48351.00 00:30:32.866 0 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 939506 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 939506 ']' 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 939506 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939506 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939506' 00:30:32.866 killing process with pid 939506 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 939506 00:30:32.866 Received shutdown signal, test time was about 10.000000 seconds 00:30:32.866 00:30:32.866 Latency(us) 00:30:32.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.866 =================================================================================================================== 00:30:32.866 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.866 [2024-07-22 23:11:08.354464] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 939506 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3euJPNQAu 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3euJPNQAu 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.866 
23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3euJPNQAu 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.D3euJPNQAu' 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=940815 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 940815 /var/tmp/bdevperf.sock 00:30:32.866 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 940815 ']' 00:30:32.867 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.867 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:32.867 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:32.867 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:32.867 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:32.867 [2024-07-22 23:11:08.684201] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:32.867 [2024-07-22 23:11:08.684303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940815 ] 00:30:32.867 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.867 [2024-07-22 23:11:08.766353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.867 [2024-07-22 23:11:08.875772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.867 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.867 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:32.867 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3euJPNQAu 00:30:33.434 [2024-07-22 23:11:09.448127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:33.434 [2024-07-22 23:11:09.448264] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:33.434 [2024-07-22 23:11:09.458919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:33.434 [2024-07-22 23:11:09.459010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61e30 (107): Transport endpoint is not connected 00:30:33.434 [2024-07-22 23:11:09.459986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61e30 (9): Bad file descriptor 00:30:33.434 [2024-07-22 23:11:09.460983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:33.434 [2024-07-22 23:11:09.461013] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:33.434 [2024-07-22 23:11:09.461039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:33.434 request: 00:30:33.434 { 00:30:33.434 "name": "TLSTEST", 00:30:33.434 "trtype": "tcp", 00:30:33.434 "traddr": "10.0.0.2", 00:30:33.434 "adrfam": "ipv4", 00:30:33.434 "trsvcid": "4420", 00:30:33.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.434 "prchk_reftag": false, 00:30:33.434 "prchk_guard": false, 00:30:33.434 "hdgst": false, 00:30:33.434 "ddgst": false, 00:30:33.434 "psk": "/tmp/tmp.D3euJPNQAu", 00:30:33.434 "method": "bdev_nvme_attach_controller", 00:30:33.434 "req_id": 1 00:30:33.434 } 00:30:33.434 Got JSON-RPC error response 00:30:33.434 response: 00:30:33.434 { 00:30:33.434 "code": -5, 00:30:33.434 "message": "Input/output error" 00:30:33.434 } 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 940815 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 940815 ']' 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 940815 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940815 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940815' 00:30:33.434 killing process with pid 940815 00:30:33.434 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 940815 00:30:33.434 Received shutdown signal, test time was about 10.000000 seconds 00:30:33.434 00:30:33.434 Latency(us) 00:30:33.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.435 =================================================================================================================== 00:30:33.435 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:33.435 [2024-07-22 23:11:09.518521] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:33.435 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 940815 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hpaENrm6Ai 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hpaENrm6Ai 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hpaENrm6Ai 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hpaENrm6Ai' 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=940952 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 940952 /var/tmp/bdevperf.sock 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 940952 ']' 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:33.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.694 23:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:33.694 [2024-07-22 23:11:09.833422] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:33.694 [2024-07-22 23:11:09.833514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940952 ] 00:30:33.694 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.694 [2024-07-22 23:11:09.904347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.953 [2024-07-22 23:11:10.012507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.953 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.953 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:33.953 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hpaENrm6Ai 00:30:34.212 [2024-07-22 23:11:10.433574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:34.212 [2024-07-22 23:11:10.433728] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:34.212 [2024-07-22 23:11:10.441383] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:30:34.212 [2024-07-22 23:11:10.441426] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:30:34.212 [2024-07-22 23:11:10.441481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:34.212 [2024-07-22 23:11:10.441588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:34.212 [2024-07-22 23:11:10.442551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x888e30 (9): Bad file descriptor 00:30:34.212 [2024-07-22 23:11:10.443548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:34.212 [2024-07-22 23:11:10.443576] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:34.212 [2024-07-22 23:11:10.443600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:34.212 request: 00:30:34.212 { 00:30:34.212 "name": "TLSTEST", 00:30:34.212 "trtype": "tcp", 00:30:34.212 "traddr": "10.0.0.2", 00:30:34.212 "adrfam": "ipv4", 00:30:34.212 "trsvcid": "4420", 00:30:34.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.212 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:34.212 "prchk_reftag": false, 00:30:34.212 "prchk_guard": false, 00:30:34.212 "hdgst": false, 00:30:34.212 "ddgst": false, 00:30:34.212 "psk": "/tmp/tmp.hpaENrm6Ai", 00:30:34.212 "method": "bdev_nvme_attach_controller", 00:30:34.212 "req_id": 1 00:30:34.212 } 00:30:34.212 Got JSON-RPC error response 00:30:34.212 response: 00:30:34.212 { 00:30:34.212 "code": -5, 00:30:34.212 "message": "Input/output error" 00:30:34.212 } 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 940952 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 940952 ']' 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 940952 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940952 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940952' 00:30:34.212 killing process with pid 940952 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 940952 00:30:34.212 Received shutdown signal, test time was about 10.000000 seconds 00:30:34.212 00:30:34.212 Latency(us) 00:30:34.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.212 =================================================================================================================== 00:30:34.212 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:34.212 [2024-07-22 23:11:10.519665] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:34.212 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 940952 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpaENrm6Ai 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpaENrm6Ai 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpaENrm6Ai 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hpaENrm6Ai' 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941088 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941088 /var/tmp/bdevperf.sock 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 941088 ']' 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:34.779 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:34.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:34.780 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:34.780 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:34.780 [2024-07-22 23:11:10.835067] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:34.780 [2024-07-22 23:11:10.835150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941088 ] 00:30:34.780 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.780 [2024-07-22 23:11:10.904806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.780 [2024-07-22 23:11:11.008055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.039 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.039 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:35.039 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hpaENrm6Ai 00:30:35.606 [2024-07-22 23:11:11.694972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:35.606 [2024-07-22 23:11:11.695122] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:35.606 [2024-07-22 23:11:11.704685] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:30:35.606 [2024-07-22 23:11:11.704728] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:30:35.606 [2024-07-22 23:11:11.704781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:35.606 [2024-07-22 23:11:11.705122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23e30 (107): Transport endpoint is not connected 00:30:35.606 [2024-07-22 23:11:11.706108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23e30 (9): Bad file descriptor 00:30:35.606 [2024-07-22 23:11:11.707105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:35.606 [2024-07-22 23:11:11.707133] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:35.606 [2024-07-22 23:11:11.707157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:30:35.606 request: 00:30:35.606 { 00:30:35.606 "name": "TLSTEST", 00:30:35.606 "trtype": "tcp", 00:30:35.606 "traddr": "10.0.0.2", 00:30:35.606 "adrfam": "ipv4", 00:30:35.606 "trsvcid": "4420", 00:30:35.606 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:35.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.606 "prchk_reftag": false, 00:30:35.606 "prchk_guard": false, 00:30:35.606 "hdgst": false, 00:30:35.606 "ddgst": false, 00:30:35.606 "psk": "/tmp/tmp.hpaENrm6Ai", 00:30:35.606 "method": "bdev_nvme_attach_controller", 00:30:35.606 "req_id": 1 00:30:35.606 } 00:30:35.606 Got JSON-RPC error response 00:30:35.606 response: 00:30:35.606 { 00:30:35.606 "code": -5, 00:30:35.606 "message": "Input/output error" 00:30:35.606 } 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 941088 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941088 ']' 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941088 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941088 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941088' 00:30:35.606 killing process with pid 941088 00:30:35.606 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941088 00:30:35.606 Received shutdown signal, test time was about 10.000000 seconds 00:30:35.606 00:30:35.606 Latency(us) 00:30:35.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.607 =================================================================================================================== 00:30:35.607 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:35.607 [2024-07-22 23:11:11.784646] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:35.607 23:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941088 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941229 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941229 /var/tmp/bdevperf.sock 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 941229 ']' 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:35.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.866 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:35.866 [2024-07-22 23:11:12.106715] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:35.866 [2024-07-22 23:11:12.106813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941229 ] 00:30:35.866 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.125 [2024-07-22 23:11:12.189277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.125 [2024-07-22 23:11:12.298089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.385 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:36.385 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:36.385 23:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:30:36.952 [2024-07-22 23:11:13.198658] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:36.952 [2024-07-22 23:11:13.200737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fc230 (9): Bad file descriptor 00:30:36.952 [2024-07-22 23:11:13.201732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:36.952 [2024-07-22 23:11:13.201761] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:36.952 [2024-07-22 23:11:13.201786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:36.952 request: 00:30:36.952 { 00:30:36.952 "name": "TLSTEST", 00:30:36.952 "trtype": "tcp", 00:30:36.952 "traddr": "10.0.0.2", 00:30:36.952 "adrfam": "ipv4", 00:30:36.952 "trsvcid": "4420", 00:30:36.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:36.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:36.952 "prchk_reftag": false, 00:30:36.952 "prchk_guard": false, 00:30:36.952 "hdgst": false, 00:30:36.952 "ddgst": false, 00:30:36.952 "method": "bdev_nvme_attach_controller", 00:30:36.952 "req_id": 1 00:30:36.952 } 00:30:36.952 Got JSON-RPC error response 00:30:36.952 response: 00:30:36.952 { 00:30:36.952 "code": -5, 00:30:36.952 "message": "Input/output error" 00:30:36.952 } 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 941229 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941229 ']' 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941229 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941229 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941229' 00:30:36.952 killing process with pid 941229 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941229 00:30:36.952 Received shutdown signal, test time was about 10.000000 seconds 00:30:36.952 00:30:36.952 Latency(us) 00:30:36.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.952 =================================================================================================================== 00:30:36.952 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:36.952 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941229 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 937224 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937224 ']' 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937224 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937224 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937224' 00:30:37.517 killing process with pid 937224 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937224 00:30:37.517 [2024-07-22 23:11:13.560302] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:37.517 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937224 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Lx0iZo1Cr9 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Lx0iZo1Cr9 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=941502 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 941502 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 941502 ']' 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.777 23:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.777 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.778 23:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:37.778 [2024-07-22 23:11:14.050629] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:37.778 [2024-07-22 23:11:14.050804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.037 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.037 [2024-07-22 23:11:14.181199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.037 [2024-07-22 23:11:14.289647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.037 [2024-07-22 23:11:14.289718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.037 [2024-07-22 23:11:14.289739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.037 [2024-07-22 23:11:14.289755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.037 [2024-07-22 23:11:14.289769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
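Note: the /tmp/tmp.Lx0iZo1Cr9 key that this target instance will be configured with was generated just above by format_interchange_psk (target/tls.sh@159-@162): nvmf/common.sh's format_key wraps the configured hex key in the NVMe/TCP PSK interchange form, i.e. the NVMeTLSkey-1 prefix, a two-hex-digit hash indicator (the "2" argument, rendered as 02), and a base64 blob, and the file is then chmod'd to 0600 so the permission checks exercised later accept it. A minimal sketch of that encoding, assuming the base64 blob is the key bytes followed by their CRC32 appended little-endian (the checksum byte order is an assumption, not confirmed by this log):

  psk_interchange_sketch() {
      local key=$1 digest=$2
      # Mirror the 'python -' heredoc pattern used by nvmf/common.sh above.
      python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
  }
  # psk_interchange_sketch 00112233445566778899aabbccddeeff0011223344556677 2
  # should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: string above if the assumptions hold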
00:30:38.037 [2024-07-22 23:11:14.289805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lx0iZo1Cr9 00:30:38.295 23:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:38.861 [2024-07-22 23:11:14.990249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.861 23:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:39.428 23:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:39.686 [2024-07-22 23:11:15.940871] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:39.686 [2024-07-22 23:11:15.941173] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.686 23:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:40.252 malloc0 00:30:40.252 23:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:40.510 23:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:30:41.077 [2024-07-22 23:11:17.295146] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lx0iZo1Cr9 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Lx0iZo1Cr9' 00:30:41.077 23:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941802 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941802 /var/tmp/bdevperf.sock 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 941802 ']' 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:41.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:41.077 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:41.336 [2024-07-22 23:11:17.410526] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:41.336 [2024-07-22 23:11:17.410653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941802 ] 00:30:41.336 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.336 [2024-07-22 23:11:17.512482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.336 [2024-07-22 23:11:17.624478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.595 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:41.595 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:41.595 23:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:30:42.530 [2024-07-22 23:11:18.518533] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:42.530 [2024-07-22 23:11:18.518683] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:42.530 TLSTESTn1 00:30:42.530 23:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:30:42.530 Running I/O for 10 seconds... 
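Note: the verify workload now running is the positive case of this group: target and initiator share /tmp/tmp.Lx0iZo1Cr9, the listener was created with -k, and the attach produced bdev TLSTESTn1. Condensed from the RPC calls logged above, with the long Jenkins workspace paths shortened to ./scripts and ./examples, the path that reaches this point is:

  # Target side, default /var/tmp/spdk.sock (target/tls.sh@51-@58)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9
  # Initiator side, bdevperf RPC socket (target/tls.sh@34), then start the workload (target/tls.sh@41)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests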
00:30:54.759 00:30:54.759 Latency(us) 00:30:54.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.759 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:54.759 Verification LBA range: start 0x0 length 0x2000 00:30:54.759 TLSTESTn1 : 10.03 2558.76 10.00 0.00 0.00 49917.68 12524.66 39807.05 00:30:54.759 =================================================================================================================== 00:30:54.759 Total : 2558.76 10.00 0.00 0.00 49917.68 12524.66 39807.05 00:30:54.759 0 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 941802 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941802 ']' 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941802 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941802 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941802' 00:30:54.759 killing process with pid 941802 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941802 00:30:54.759 Received shutdown signal, test time was about 10.000000 seconds 00:30:54.759 00:30:54.759 Latency(us) 00:30:54.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.759 =================================================================================================================== 00:30:54.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:54.759 [2024-07-22 23:11:28.972713] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:54.759 23:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941802 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Lx0iZo1Cr9 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lx0iZo1Cr9 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lx0iZo1Cr9 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:30:54.759 
23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lx0iZo1Cr9 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Lx0iZo1Cr9' 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=943126 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 943126 /var/tmp/bdevperf.sock 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 943126 ']' 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:54.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:54.759 [2024-07-22 23:11:29.348106] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
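Note: this bdevperf instance (pid 943126) is pointed at the same key file, which target/tls.sh@170 above changed to mode 0666; the attach attempt below is meant to fail, and it does: bdev_nvme_load_psk rejects the file with "Incorrect permissions for PSK file" before any connection is made, and the RPC returns -1 (Operation not permitted). A hedged sketch of that kind of permission gate, assuming the rule is simply that no group/other bits may be set (the exact mask SPDK applies is not shown in this log):

  psk_perms_ok() {
      local mode
      mode=$(stat -c '%a' "$1") || return 1
      (( (8#$mode & 8#077) == 0 ))    # 0600 passes, 0666 fails
  }
  # psk_perms_ok /tmp/tmp.Lx0iZo1Cr9 || echo 'Incorrect permissions for PSK file'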
00:30:54.759 [2024-07-22 23:11:29.348292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943126 ] 00:30:54.759 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.759 [2024-07-22 23:11:29.466201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.759 [2024-07-22 23:11:29.571932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:54.759 23:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:30:54.759 [2024-07-22 23:11:30.457751] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:54.759 [2024-07-22 23:11:30.457865] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:30:54.759 [2024-07-22 23:11:30.457886] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Lx0iZo1Cr9 00:30:54.759 request: 00:30:54.759 { 00:30:54.759 "name": "TLSTEST", 00:30:54.759 "trtype": "tcp", 00:30:54.759 "traddr": "10.0.0.2", 00:30:54.759 "adrfam": "ipv4", 00:30:54.759 "trsvcid": "4420", 00:30:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.759 "prchk_reftag": false, 00:30:54.759 "prchk_guard": false, 00:30:54.759 "hdgst": false, 00:30:54.759 "ddgst": false, 00:30:54.759 "psk": "/tmp/tmp.Lx0iZo1Cr9", 00:30:54.759 "method": "bdev_nvme_attach_controller", 00:30:54.759 "req_id": 1 00:30:54.759 } 00:30:54.759 Got JSON-RPC error response 00:30:54.759 response: 00:30:54.759 { 00:30:54.759 "code": -1, 00:30:54.759 "message": "Operation not permitted" 00:30:54.759 } 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 943126 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 943126 ']' 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 943126 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 943126 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 943126' 00:30:54.759 killing process with pid 943126 00:30:54.759 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 943126 00:30:54.759 Received shutdown signal, test time was about 10.000000 seconds 00:30:54.759 
00:30:54.759 Latency(us) 00:30:54.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.760 =================================================================================================================== 00:30:54.760 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 943126 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 941502 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941502 ']' 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941502 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941502 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941502' 00:30:54.760 killing process with pid 941502 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941502 00:30:54.760 [2024-07-22 23:11:30.844391] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:54.760 23:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941502 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=943396 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 943396 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 943396 ']' 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.018 23:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.018 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:55.018 [2024-07-22 23:11:31.196976] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:55.018 [2024-07-22 23:11:31.197078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.018 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.018 [2024-07-22 23:11:31.280462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.277 [2024-07-22 23:11:31.391195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.277 [2024-07-22 23:11:31.391268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.277 [2024-07-22 23:11:31.391288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.277 [2024-07-22 23:11:31.391303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.277 [2024-07-22 23:11:31.391329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
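Note: with /tmp/tmp.Lx0iZo1Cr9 still at mode 0666, the NOT setup_nvmf_tgt that follows is the target-side half of the same permission test: the transport, subsystem, listener and namespace steps go through, but nvmf_subsystem_add_host cannot read the PSK file and returns -32603 (Internal error), which is what the NOT wrapper traced from autotest_common.sh is there to assert. A minimal sketch of that inversion pattern, covering only the exit-status handling (the real helper also validates that its argument is a callable command):

  NOT() {
      # Succeed only if the wrapped command fails; used for expected-failure test steps.
      if "$@"; then
          return 1
      fi
      return 0
  }
  # NOT setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9   # passes because the 0666 key is rejected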
00:30:55.277 [2024-07-22 23:11:31.391367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lx0iZo1Cr9 00:30:55.277 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:55.843 [2024-07-22 23:11:31.855488] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.843 23:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:56.102 23:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:56.360 [2024-07-22 23:11:32.513382] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:56.360 [2024-07-22 23:11:32.513679] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.360 23:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:56.926 malloc0 00:30:56.926 23:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:57.497 23:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:30:58.062 [2024-07-22 23:11:34.325167] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:30:58.062 [2024-07-22 23:11:34.325222] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:30:58.062 [2024-07-22 23:11:34.325266] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:30:58.062 request: 00:30:58.062 { 00:30:58.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.062 "host": "nqn.2016-06.io.spdk:host1", 00:30:58.062 "psk": "/tmp/tmp.Lx0iZo1Cr9", 00:30:58.062 "method": "nvmf_subsystem_add_host", 00:30:58.062 "req_id": 1 00:30:58.062 } 00:30:58.062 Got JSON-RPC error response 00:30:58.062 response: 00:30:58.062 { 00:30:58.062 "code": -32603, 00:30:58.062 "message": "Internal error" 00:30:58.062 } 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 943396 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 943396 ']' 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 943396 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:58.062 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 943396 00:30:58.320 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:58.320 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:58.320 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 943396' 00:30:58.320 killing process with pid 943396 00:30:58.321 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 943396 00:30:58.321 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 943396 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Lx0iZo1Cr9 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=943786 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
943786 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 943786 ']' 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:58.579 23:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:58.579 [2024-07-22 23:11:34.771804] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:58.579 [2024-07-22 23:11:34.771937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.579 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.579 [2024-07-22 23:11:34.879143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.837 [2024-07-22 23:11:34.988835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.837 [2024-07-22 23:11:34.988900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.837 [2024-07-22 23:11:34.988919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.837 [2024-07-22 23:11:34.988935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.837 [2024-07-22 23:11:34.988949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
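Note: this third target instance (pid 943786) repeats the setup with the key restored to mode 0600 by target/tls.sh@181, so nvmf_subsystem_add_host succeeds again, and target/tls.sh@196/@197 then dump the live configuration of both the target and the bdevperf initiator with save_config. Those JSON dumps (tgtconf and bdevperfconf below) record the whole TLS setup, including the subsystem host entry with its psk path, so an equivalent target could be rebuilt from them; a small sketch of that round trip (the second RPC socket is hypothetical, only save_config and load_config are taken from this log):

  ./scripts/rpc.py save_config > /tmp/tgt_tls.json                           # as in target/tls.sh@196
  ./scripts/rpc.py -s /var/tmp/spdk2.sock load_config < /tmp/tgt_tls.json    # replay against another instance (hypothetical socket)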
00:30:58.837 [2024-07-22 23:11:34.988986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.837 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:58.837 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:58.837 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:58.837 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:58.837 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:59.096 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.096 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9 00:30:59.096 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lx0iZo1Cr9 00:30:59.096 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:59.354 [2024-07-22 23:11:35.488998] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.354 23:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:59.921 23:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:00.179 [2024-07-22 23:11:36.467695] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:00.179 [2024-07-22 23:11:36.467990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.179 23:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:01.114 malloc0 00:31:01.114 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:01.372 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:31:01.629 [2024-07-22 23:11:37.797954] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=944112 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 944112 /var/tmp/bdevperf.sock 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # 
'[' -z 944112 ']' 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.629 23:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:01.629 [2024-07-22 23:11:37.911523] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:01.629 [2024-07-22 23:11:37.911695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944112 ] 00:31:01.887 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.887 [2024-07-22 23:11:38.030470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.888 [2024-07-22 23:11:38.147145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.146 23:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:02.146 23:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:02.146 23:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:31:03.078 [2024-07-22 23:11:39.066438] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:03.078 [2024-07-22 23:11:39.066576] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:03.078 TLSTESTn1 00:31:03.078 23:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:31:03.336 23:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:31:03.336 "subsystems": [ 00:31:03.336 { 00:31:03.336 "subsystem": "keyring", 00:31:03.336 "config": [] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "iobuf", 00:31:03.336 "config": [ 00:31:03.336 { 00:31:03.336 "method": "iobuf_set_options", 00:31:03.336 "params": { 00:31:03.336 "small_pool_count": 8192, 00:31:03.336 "large_pool_count": 1024, 00:31:03.336 "small_bufsize": 8192, 00:31:03.336 "large_bufsize": 135168 00:31:03.336 } 00:31:03.336 } 00:31:03.336 ] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "sock", 00:31:03.336 "config": [ 00:31:03.336 { 00:31:03.336 "method": "sock_set_default_impl", 00:31:03.336 "params": { 00:31:03.336 "impl_name": "posix" 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "sock_impl_set_options", 00:31:03.336 "params": { 00:31:03.336 "impl_name": "ssl", 00:31:03.336 "recv_buf_size": 4096, 00:31:03.336 "send_buf_size": 4096, 
00:31:03.336 "enable_recv_pipe": true, 00:31:03.336 "enable_quickack": false, 00:31:03.336 "enable_placement_id": 0, 00:31:03.336 "enable_zerocopy_send_server": true, 00:31:03.336 "enable_zerocopy_send_client": false, 00:31:03.336 "zerocopy_threshold": 0, 00:31:03.336 "tls_version": 0, 00:31:03.336 "enable_ktls": false 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "sock_impl_set_options", 00:31:03.336 "params": { 00:31:03.336 "impl_name": "posix", 00:31:03.336 "recv_buf_size": 2097152, 00:31:03.336 "send_buf_size": 2097152, 00:31:03.336 "enable_recv_pipe": true, 00:31:03.336 "enable_quickack": false, 00:31:03.336 "enable_placement_id": 0, 00:31:03.336 "enable_zerocopy_send_server": true, 00:31:03.336 "enable_zerocopy_send_client": false, 00:31:03.336 "zerocopy_threshold": 0, 00:31:03.336 "tls_version": 0, 00:31:03.336 "enable_ktls": false 00:31:03.336 } 00:31:03.336 } 00:31:03.336 ] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "vmd", 00:31:03.336 "config": [] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "accel", 00:31:03.336 "config": [ 00:31:03.336 { 00:31:03.336 "method": "accel_set_options", 00:31:03.336 "params": { 00:31:03.336 "small_cache_size": 128, 00:31:03.336 "large_cache_size": 16, 00:31:03.336 "task_count": 2048, 00:31:03.336 "sequence_count": 2048, 00:31:03.336 "buf_count": 2048 00:31:03.336 } 00:31:03.336 } 00:31:03.336 ] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "bdev", 00:31:03.336 "config": [ 00:31:03.336 { 00:31:03.336 "method": "bdev_set_options", 00:31:03.336 "params": { 00:31:03.336 "bdev_io_pool_size": 65535, 00:31:03.336 "bdev_io_cache_size": 256, 00:31:03.336 "bdev_auto_examine": true, 00:31:03.336 "iobuf_small_cache_size": 128, 00:31:03.336 "iobuf_large_cache_size": 16 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "bdev_raid_set_options", 00:31:03.336 "params": { 00:31:03.336 "process_window_size_kb": 1024, 00:31:03.336 "process_max_bandwidth_mb_sec": 0 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "bdev_iscsi_set_options", 00:31:03.336 "params": { 00:31:03.336 "timeout_sec": 30 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "bdev_nvme_set_options", 00:31:03.336 "params": { 00:31:03.336 "action_on_timeout": "none", 00:31:03.336 "timeout_us": 0, 00:31:03.336 "timeout_admin_us": 0, 00:31:03.336 "keep_alive_timeout_ms": 10000, 00:31:03.336 "arbitration_burst": 0, 00:31:03.336 "low_priority_weight": 0, 00:31:03.336 "medium_priority_weight": 0, 00:31:03.336 "high_priority_weight": 0, 00:31:03.336 "nvme_adminq_poll_period_us": 10000, 00:31:03.336 "nvme_ioq_poll_period_us": 0, 00:31:03.336 "io_queue_requests": 0, 00:31:03.336 "delay_cmd_submit": true, 00:31:03.336 "transport_retry_count": 4, 00:31:03.336 "bdev_retry_count": 3, 00:31:03.336 "transport_ack_timeout": 0, 00:31:03.336 "ctrlr_loss_timeout_sec": 0, 00:31:03.336 "reconnect_delay_sec": 0, 00:31:03.336 "fast_io_fail_timeout_sec": 0, 00:31:03.336 "disable_auto_failback": false, 00:31:03.336 "generate_uuids": false, 00:31:03.336 "transport_tos": 0, 00:31:03.336 "nvme_error_stat": false, 00:31:03.336 "rdma_srq_size": 0, 00:31:03.336 "io_path_stat": false, 00:31:03.336 "allow_accel_sequence": false, 00:31:03.336 "rdma_max_cq_size": 0, 00:31:03.336 "rdma_cm_event_timeout_ms": 0, 00:31:03.336 "dhchap_digests": [ 00:31:03.336 "sha256", 00:31:03.336 "sha384", 00:31:03.336 "sha512" 00:31:03.336 ], 00:31:03.336 "dhchap_dhgroups": [ 00:31:03.336 "null", 00:31:03.336 "ffdhe2048", 00:31:03.336 
"ffdhe3072", 00:31:03.336 "ffdhe4096", 00:31:03.336 "ffdhe6144", 00:31:03.336 "ffdhe8192" 00:31:03.336 ] 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "bdev_nvme_set_hotplug", 00:31:03.336 "params": { 00:31:03.336 "period_us": 100000, 00:31:03.336 "enable": false 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "bdev_malloc_create", 00:31:03.336 "params": { 00:31:03.336 "name": "malloc0", 00:31:03.336 "num_blocks": 8192, 00:31:03.336 "block_size": 4096, 00:31:03.336 "physical_block_size": 4096, 00:31:03.336 "uuid": "9919582d-abfc-41e3-8aab-cb5715a03122", 00:31:03.336 "optimal_io_boundary": 0, 00:31:03.336 "md_size": 0, 00:31:03.336 "dif_type": 0, 00:31:03.336 "dif_is_head_of_md": false, 00:31:03.336 "dif_pi_format": 0 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "bdev_wait_for_examine" 00:31:03.336 } 00:31:03.336 ] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "nbd", 00:31:03.336 "config": [] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "scheduler", 00:31:03.336 "config": [ 00:31:03.336 { 00:31:03.336 "method": "framework_set_scheduler", 00:31:03.336 "params": { 00:31:03.336 "name": "static" 00:31:03.336 } 00:31:03.336 } 00:31:03.336 ] 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "subsystem": "nvmf", 00:31:03.336 "config": [ 00:31:03.336 { 00:31:03.336 "method": "nvmf_set_config", 00:31:03.336 "params": { 00:31:03.336 "discovery_filter": "match_any", 00:31:03.336 "admin_cmd_passthru": { 00:31:03.336 "identify_ctrlr": false 00:31:03.336 } 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "nvmf_set_max_subsystems", 00:31:03.336 "params": { 00:31:03.336 "max_subsystems": 1024 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "nvmf_set_crdt", 00:31:03.336 "params": { 00:31:03.336 "crdt1": 0, 00:31:03.336 "crdt2": 0, 00:31:03.336 "crdt3": 0 00:31:03.336 } 00:31:03.336 }, 00:31:03.336 { 00:31:03.336 "method": "nvmf_create_transport", 00:31:03.336 "params": { 00:31:03.336 "trtype": "TCP", 00:31:03.336 "max_queue_depth": 128, 00:31:03.336 "max_io_qpairs_per_ctrlr": 127, 00:31:03.336 "in_capsule_data_size": 4096, 00:31:03.336 "max_io_size": 131072, 00:31:03.336 "io_unit_size": 131072, 00:31:03.336 "max_aq_depth": 128, 00:31:03.336 "num_shared_buffers": 511, 00:31:03.337 "buf_cache_size": 4294967295, 00:31:03.337 "dif_insert_or_strip": false, 00:31:03.337 "zcopy": false, 00:31:03.337 "c2h_success": false, 00:31:03.337 "sock_priority": 0, 00:31:03.337 "abort_timeout_sec": 1, 00:31:03.337 "ack_timeout": 0, 00:31:03.337 "data_wr_pool_size": 0 00:31:03.337 } 00:31:03.337 }, 00:31:03.337 { 00:31:03.337 "method": "nvmf_create_subsystem", 00:31:03.337 "params": { 00:31:03.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.337 "allow_any_host": false, 00:31:03.337 "serial_number": "SPDK00000000000001", 00:31:03.337 "model_number": "SPDK bdev Controller", 00:31:03.337 "max_namespaces": 10, 00:31:03.337 "min_cntlid": 1, 00:31:03.337 "max_cntlid": 65519, 00:31:03.337 "ana_reporting": false 00:31:03.337 } 00:31:03.337 }, 00:31:03.337 { 00:31:03.337 "method": "nvmf_subsystem_add_host", 00:31:03.337 "params": { 00:31:03.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.337 "host": "nqn.2016-06.io.spdk:host1", 00:31:03.337 "psk": "/tmp/tmp.Lx0iZo1Cr9" 00:31:03.337 } 00:31:03.337 }, 00:31:03.337 { 00:31:03.337 "method": "nvmf_subsystem_add_ns", 00:31:03.337 "params": { 00:31:03.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.337 "namespace": { 00:31:03.337 "nsid": 1, 00:31:03.337 
"bdev_name": "malloc0", 00:31:03.337 "nguid": "9919582DABFC41E38AABCB5715A03122", 00:31:03.337 "uuid": "9919582d-abfc-41e3-8aab-cb5715a03122", 00:31:03.337 "no_auto_visible": false 00:31:03.337 } 00:31:03.337 } 00:31:03.337 }, 00:31:03.337 { 00:31:03.337 "method": "nvmf_subsystem_add_listener", 00:31:03.337 "params": { 00:31:03.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.337 "listen_address": { 00:31:03.337 "trtype": "TCP", 00:31:03.337 "adrfam": "IPv4", 00:31:03.337 "traddr": "10.0.0.2", 00:31:03.337 "trsvcid": "4420" 00:31:03.337 }, 00:31:03.337 "secure_channel": true 00:31:03.337 } 00:31:03.337 } 00:31:03.337 ] 00:31:03.337 } 00:31:03.337 ] 00:31:03.337 }' 00:31:03.337 23:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:31:04.268 "subsystems": [ 00:31:04.268 { 00:31:04.268 "subsystem": "keyring", 00:31:04.268 "config": [] 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "subsystem": "iobuf", 00:31:04.268 "config": [ 00:31:04.268 { 00:31:04.268 "method": "iobuf_set_options", 00:31:04.268 "params": { 00:31:04.268 "small_pool_count": 8192, 00:31:04.268 "large_pool_count": 1024, 00:31:04.268 "small_bufsize": 8192, 00:31:04.268 "large_bufsize": 135168 00:31:04.268 } 00:31:04.268 } 00:31:04.268 ] 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "subsystem": "sock", 00:31:04.268 "config": [ 00:31:04.268 { 00:31:04.268 "method": "sock_set_default_impl", 00:31:04.268 "params": { 00:31:04.268 "impl_name": "posix" 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "sock_impl_set_options", 00:31:04.268 "params": { 00:31:04.268 "impl_name": "ssl", 00:31:04.268 "recv_buf_size": 4096, 00:31:04.268 "send_buf_size": 4096, 00:31:04.268 "enable_recv_pipe": true, 00:31:04.268 "enable_quickack": false, 00:31:04.268 "enable_placement_id": 0, 00:31:04.268 "enable_zerocopy_send_server": true, 00:31:04.268 "enable_zerocopy_send_client": false, 00:31:04.268 "zerocopy_threshold": 0, 00:31:04.268 "tls_version": 0, 00:31:04.268 "enable_ktls": false 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "sock_impl_set_options", 00:31:04.268 "params": { 00:31:04.268 "impl_name": "posix", 00:31:04.268 "recv_buf_size": 2097152, 00:31:04.268 "send_buf_size": 2097152, 00:31:04.268 "enable_recv_pipe": true, 00:31:04.268 "enable_quickack": false, 00:31:04.268 "enable_placement_id": 0, 00:31:04.268 "enable_zerocopy_send_server": true, 00:31:04.268 "enable_zerocopy_send_client": false, 00:31:04.268 "zerocopy_threshold": 0, 00:31:04.268 "tls_version": 0, 00:31:04.268 "enable_ktls": false 00:31:04.268 } 00:31:04.268 } 00:31:04.268 ] 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "subsystem": "vmd", 00:31:04.268 "config": [] 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "subsystem": "accel", 00:31:04.268 "config": [ 00:31:04.268 { 00:31:04.268 "method": "accel_set_options", 00:31:04.268 "params": { 00:31:04.268 "small_cache_size": 128, 00:31:04.268 "large_cache_size": 16, 00:31:04.268 "task_count": 2048, 00:31:04.268 "sequence_count": 2048, 00:31:04.268 "buf_count": 2048 00:31:04.268 } 00:31:04.268 } 00:31:04.268 ] 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "subsystem": "bdev", 00:31:04.268 "config": [ 00:31:04.268 { 00:31:04.268 "method": "bdev_set_options", 00:31:04.268 "params": { 00:31:04.268 "bdev_io_pool_size": 65535, 00:31:04.268 "bdev_io_cache_size": 256, 00:31:04.268 
"bdev_auto_examine": true, 00:31:04.268 "iobuf_small_cache_size": 128, 00:31:04.268 "iobuf_large_cache_size": 16 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "bdev_raid_set_options", 00:31:04.268 "params": { 00:31:04.268 "process_window_size_kb": 1024, 00:31:04.268 "process_max_bandwidth_mb_sec": 0 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "bdev_iscsi_set_options", 00:31:04.268 "params": { 00:31:04.268 "timeout_sec": 30 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "bdev_nvme_set_options", 00:31:04.268 "params": { 00:31:04.268 "action_on_timeout": "none", 00:31:04.268 "timeout_us": 0, 00:31:04.268 "timeout_admin_us": 0, 00:31:04.268 "keep_alive_timeout_ms": 10000, 00:31:04.268 "arbitration_burst": 0, 00:31:04.268 "low_priority_weight": 0, 00:31:04.268 "medium_priority_weight": 0, 00:31:04.268 "high_priority_weight": 0, 00:31:04.268 "nvme_adminq_poll_period_us": 10000, 00:31:04.268 "nvme_ioq_poll_period_us": 0, 00:31:04.268 "io_queue_requests": 512, 00:31:04.268 "delay_cmd_submit": true, 00:31:04.268 "transport_retry_count": 4, 00:31:04.268 "bdev_retry_count": 3, 00:31:04.268 "transport_ack_timeout": 0, 00:31:04.268 "ctrlr_loss_timeout_sec": 0, 00:31:04.268 "reconnect_delay_sec": 0, 00:31:04.268 "fast_io_fail_timeout_sec": 0, 00:31:04.268 "disable_auto_failback": false, 00:31:04.268 "generate_uuids": false, 00:31:04.268 "transport_tos": 0, 00:31:04.268 "nvme_error_stat": false, 00:31:04.268 "rdma_srq_size": 0, 00:31:04.268 "io_path_stat": false, 00:31:04.268 "allow_accel_sequence": false, 00:31:04.268 "rdma_max_cq_size": 0, 00:31:04.268 "rdma_cm_event_timeout_ms": 0, 00:31:04.268 "dhchap_digests": [ 00:31:04.268 "sha256", 00:31:04.268 "sha384", 00:31:04.268 "sha512" 00:31:04.268 ], 00:31:04.268 "dhchap_dhgroups": [ 00:31:04.268 "null", 00:31:04.268 "ffdhe2048", 00:31:04.268 "ffdhe3072", 00:31:04.268 "ffdhe4096", 00:31:04.268 "ffdhe6144", 00:31:04.268 "ffdhe8192" 00:31:04.268 ] 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "bdev_nvme_attach_controller", 00:31:04.268 "params": { 00:31:04.268 "name": "TLSTEST", 00:31:04.268 "trtype": "TCP", 00:31:04.268 "adrfam": "IPv4", 00:31:04.268 "traddr": "10.0.0.2", 00:31:04.268 "trsvcid": "4420", 00:31:04.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.268 "prchk_reftag": false, 00:31:04.268 "prchk_guard": false, 00:31:04.268 "ctrlr_loss_timeout_sec": 0, 00:31:04.268 "reconnect_delay_sec": 0, 00:31:04.268 "fast_io_fail_timeout_sec": 0, 00:31:04.268 "psk": "/tmp/tmp.Lx0iZo1Cr9", 00:31:04.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:04.268 "hdgst": false, 00:31:04.268 "ddgst": false 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "bdev_nvme_set_hotplug", 00:31:04.268 "params": { 00:31:04.268 "period_us": 100000, 00:31:04.268 "enable": false 00:31:04.268 } 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "method": "bdev_wait_for_examine" 00:31:04.268 } 00:31:04.268 ] 00:31:04.268 }, 00:31:04.268 { 00:31:04.268 "subsystem": "nbd", 00:31:04.268 "config": [] 00:31:04.268 } 00:31:04.268 ] 00:31:04.268 }' 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 944112 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944112 ']' 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944112 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:04.268 
23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944112 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944112' 00:31:04.268 killing process with pid 944112 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944112 00:31:04.268 Received shutdown signal, test time was about 10.000000 seconds 00:31:04.268 00:31:04.268 Latency(us) 00:31:04.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.268 =================================================================================================================== 00:31:04.268 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:04.268 [2024-07-22 23:11:40.352793] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:04.268 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944112 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 943786 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 943786 ']' 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 943786 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 943786 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 943786' 00:31:04.526 killing process with pid 943786 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 943786 00:31:04.526 [2024-07-22 23:11:40.658822] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:04.526 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 943786 00:31:04.784 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:31:04.784 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:31:04.784 "subsystems": [ 00:31:04.784 { 00:31:04.784 "subsystem": "keyring", 00:31:04.784 "config": [] 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "subsystem": "iobuf", 00:31:04.784 "config": [ 00:31:04.784 { 00:31:04.784 "method": "iobuf_set_options", 00:31:04.784 "params": { 00:31:04.784 "small_pool_count": 8192, 00:31:04.784 "large_pool_count": 1024, 00:31:04.784 "small_bufsize": 8192, 
00:31:04.784 "large_bufsize": 135168 00:31:04.784 } 00:31:04.784 } 00:31:04.784 ] 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "subsystem": "sock", 00:31:04.784 "config": [ 00:31:04.784 { 00:31:04.784 "method": "sock_set_default_impl", 00:31:04.784 "params": { 00:31:04.784 "impl_name": "posix" 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "sock_impl_set_options", 00:31:04.784 "params": { 00:31:04.784 "impl_name": "ssl", 00:31:04.784 "recv_buf_size": 4096, 00:31:04.784 "send_buf_size": 4096, 00:31:04.784 "enable_recv_pipe": true, 00:31:04.784 "enable_quickack": false, 00:31:04.784 "enable_placement_id": 0, 00:31:04.784 "enable_zerocopy_send_server": true, 00:31:04.784 "enable_zerocopy_send_client": false, 00:31:04.784 "zerocopy_threshold": 0, 00:31:04.784 "tls_version": 0, 00:31:04.784 "enable_ktls": false 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "sock_impl_set_options", 00:31:04.784 "params": { 00:31:04.784 "impl_name": "posix", 00:31:04.784 "recv_buf_size": 2097152, 00:31:04.784 "send_buf_size": 2097152, 00:31:04.784 "enable_recv_pipe": true, 00:31:04.784 "enable_quickack": false, 00:31:04.784 "enable_placement_id": 0, 00:31:04.784 "enable_zerocopy_send_server": true, 00:31:04.784 "enable_zerocopy_send_client": false, 00:31:04.784 "zerocopy_threshold": 0, 00:31:04.784 "tls_version": 0, 00:31:04.784 "enable_ktls": false 00:31:04.784 } 00:31:04.784 } 00:31:04.784 ] 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "subsystem": "vmd", 00:31:04.784 "config": [] 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "subsystem": "accel", 00:31:04.784 "config": [ 00:31:04.784 { 00:31:04.784 "method": "accel_set_options", 00:31:04.784 "params": { 00:31:04.784 "small_cache_size": 128, 00:31:04.784 "large_cache_size": 16, 00:31:04.784 "task_count": 2048, 00:31:04.784 "sequence_count": 2048, 00:31:04.784 "buf_count": 2048 00:31:04.784 } 00:31:04.784 } 00:31:04.784 ] 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "subsystem": "bdev", 00:31:04.784 "config": [ 00:31:04.784 { 00:31:04.784 "method": "bdev_set_options", 00:31:04.784 "params": { 00:31:04.784 "bdev_io_pool_size": 65535, 00:31:04.784 "bdev_io_cache_size": 256, 00:31:04.784 "bdev_auto_examine": true, 00:31:04.784 "iobuf_small_cache_size": 128, 00:31:04.784 "iobuf_large_cache_size": 16 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "bdev_raid_set_options", 00:31:04.784 "params": { 00:31:04.784 "process_window_size_kb": 1024, 00:31:04.784 "process_max_bandwidth_mb_sec": 0 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "bdev_iscsi_set_options", 00:31:04.784 "params": { 00:31:04.784 "timeout_sec": 30 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "bdev_nvme_set_options", 00:31:04.784 "params": { 00:31:04.784 "action_on_timeout": "none", 00:31:04.784 "timeout_us": 0, 00:31:04.784 "timeout_admin_us": 0, 00:31:04.784 "keep_alive_timeout_ms": 10000, 00:31:04.784 "arbitration_burst": 0, 00:31:04.784 "low_priority_weight": 0, 00:31:04.784 "medium_priority_weight": 0, 00:31:04.784 "high_priority_weight": 0, 00:31:04.784 "nvme_adminq_poll_period_us": 10000, 00:31:04.784 "nvme_ioq_poll_period_us": 0, 00:31:04.784 "io_queue_requests": 0, 00:31:04.784 "delay_cmd_submit": true, 00:31:04.784 "transport_retry_count": 4, 00:31:04.784 "bdev_retry_count": 3, 00:31:04.784 "transport_ack_timeout": 0, 00:31:04.784 "ctrlr_loss_timeout_sec": 0, 00:31:04.784 "reconnect_delay_sec": 0, 00:31:04.784 "fast_io_fail_timeout_sec": 0, 00:31:04.784 "disable_auto_failback": false, 
00:31:04.784 "generate_uuids": false, 00:31:04.784 "transport_tos": 0, 00:31:04.784 "nvme_error_stat": false, 00:31:04.784 "rdma_srq_size": 0, 00:31:04.784 "io_path_stat": false, 00:31:04.784 "allow_accel_sequence": false, 00:31:04.784 "rdma_max_cq_size": 0, 00:31:04.784 "rdma_cm_event_timeout_ms": 0, 00:31:04.784 "dhchap_digests": [ 00:31:04.784 "sha256", 00:31:04.784 "sha384", 00:31:04.784 "sha512" 00:31:04.784 ], 00:31:04.784 "dhchap_dhgroups": [ 00:31:04.784 "null", 00:31:04.784 "ffdhe2048", 00:31:04.784 "ffdhe3072", 00:31:04.784 "ffdhe4096", 00:31:04.784 "ffdhe6144", 00:31:04.784 "ffdhe8192" 00:31:04.784 ] 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "bdev_nvme_set_hotplug", 00:31:04.784 "params": { 00:31:04.784 "period_us": 100000, 00:31:04.784 "enable": false 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "bdev_malloc_create", 00:31:04.784 "params": { 00:31:04.784 "name": "malloc0", 00:31:04.784 "num_blocks": 8192, 00:31:04.784 "block_size": 4096, 00:31:04.784 "physical_block_size": 4096, 00:31:04.784 "uuid": "9919582d-abfc-41e3-8aab-cb5715a03122", 00:31:04.784 "optimal_io_boundary": 0, 00:31:04.784 "md_size": 0, 00:31:04.784 "dif_type": 0, 00:31:04.784 "dif_is_head_of_md": false, 00:31:04.784 "dif_pi_format": 0 00:31:04.784 } 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "method": "bdev_wait_for_examine" 00:31:04.784 } 00:31:04.784 ] 00:31:04.784 }, 00:31:04.784 { 00:31:04.784 "subsystem": "nbd", 00:31:04.784 "config": [] 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "subsystem": "scheduler", 00:31:04.785 "config": [ 00:31:04.785 { 00:31:04.785 "method": "framework_set_scheduler", 00:31:04.785 "params": { 00:31:04.785 "name": "static" 00:31:04.785 } 00:31:04.785 } 00:31:04.785 ] 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "subsystem": "nvmf", 00:31:04.785 "config": [ 00:31:04.785 { 00:31:04.785 "method": "nvmf_set_config", 00:31:04.785 "params": { 00:31:04.785 "discovery_filter": "match_any", 00:31:04.785 "admin_cmd_passthru": { 00:31:04.785 "identify_ctrlr": false 00:31:04.785 } 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_set_max_subsystems", 00:31:04.785 "params": { 00:31:04.785 "max_subsystems": 1024 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_set_crdt", 00:31:04.785 "params": { 00:31:04.785 "crdt1": 0, 00:31:04.785 "crdt2": 0, 00:31:04.785 "crdt3": 0 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_create_transport", 00:31:04.785 "params": { 00:31:04.785 "trtype": "TCP", 00:31:04.785 "max_queue_depth": 128, 00:31:04.785 "max_io_qpairs_per_ctrlr": 127, 00:31:04.785 "in_capsule_data_size": 4096, 00:31:04.785 "max_io_size": 131072, 00:31:04.785 "io_unit_size": 131072, 00:31:04.785 "max_aq_depth": 128, 00:31:04.785 "num_shared_buffers": 511, 00:31:04.785 "buf_cache_size": 4294967295, 00:31:04.785 "dif_insert_or_strip": false, 00:31:04.785 "zcopy": false, 00:31:04.785 "c2h_success": false, 00:31:04.785 "sock_priority": 0, 00:31:04.785 "abort_timeout_sec": 1, 00:31:04.785 "ack_timeout": 0, 00:31:04.785 "data_wr_pool_size": 0 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_create_subsystem", 00:31:04.785 "params": { 00:31:04.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.785 "allow_any_host": false, 00:31:04.785 "serial_number": "SPDK00000000000001", 00:31:04.785 "model_number": "SPDK bdev Controller", 00:31:04.785 "max_namespaces": 10, 00:31:04.785 "min_cntlid": 1, 00:31:04.785 "max_cntlid": 65519, 00:31:04.785 
"ana_reporting": false 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_subsystem_add_host", 00:31:04.785 "params": { 00:31:04.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.785 "host": "nqn.2016-06.io.spdk:host1", 00:31:04.785 "psk": "/tmp/tmp.Lx0iZo1Cr9" 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_subsystem_add_ns", 00:31:04.785 "params": { 00:31:04.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.785 "namespace": { 00:31:04.785 "nsid": 1, 00:31:04.785 "bdev_name": "malloc0", 00:31:04.785 "nguid": "9919582DABFC41E38AABCB5715A03122", 00:31:04.785 "uuid": "9919582d-abfc-41e3-8aab-cb5715a03122", 00:31:04.785 "no_auto_visible": false 00:31:04.785 } 00:31:04.785 } 00:31:04.785 }, 00:31:04.785 { 00:31:04.785 "method": "nvmf_subsystem_add_listener", 00:31:04.785 "params": { 00:31:04.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.785 "listen_address": { 00:31:04.785 "trtype": "TCP", 00:31:04.785 "adrfam": "IPv4", 00:31:04.785 "traddr": "10.0.0.2", 00:31:04.785 "trsvcid": "4420" 00:31:04.785 }, 00:31:04.785 "secure_channel": true 00:31:04.785 } 00:31:04.785 } 00:31:04.785 ] 00:31:04.785 } 00:31:04.785 ] 00:31:04.785 }' 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=944516 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 944516 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944516 ']' 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:04.785 23:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:04.785 [2024-07-22 23:11:41.021109] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:04.785 [2024-07-22 23:11:41.021218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.785 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.043 [2024-07-22 23:11:41.112050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.043 [2024-07-22 23:11:41.220850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:05.043 [2024-07-22 23:11:41.220918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.043 [2024-07-22 23:11:41.220937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.043 [2024-07-22 23:11:41.220954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.043 [2024-07-22 23:11:41.220968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.043 [2024-07-22 23:11:41.221077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.301 [2024-07-22 23:11:41.476654] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.301 [2024-07-22 23:11:41.498278] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:05.301 [2024-07-22 23:11:41.514344] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:05.301 [2024-07-22 23:11:41.514640] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.867 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.867 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:05.867 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=944669 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 944669 /var/tmp/bdevperf.sock 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:31:06.125 "subsystems": [ 00:31:06.125 { 00:31:06.125 "subsystem": "keyring", 00:31:06.125 "config": [] 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "subsystem": "iobuf", 00:31:06.125 "config": [ 00:31:06.125 { 00:31:06.125 "method": "iobuf_set_options", 00:31:06.125 "params": { 00:31:06.125 "small_pool_count": 8192, 00:31:06.125 "large_pool_count": 1024, 00:31:06.125 "small_bufsize": 8192, 00:31:06.125 "large_bufsize": 135168 00:31:06.125 } 00:31:06.125 } 00:31:06.125 ] 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "subsystem": "sock", 00:31:06.125 "config": [ 00:31:06.125 { 00:31:06.125 "method": "sock_set_default_impl", 00:31:06.125 "params": { 00:31:06.125 "impl_name": "posix" 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "sock_impl_set_options", 00:31:06.125 "params": { 00:31:06.125 "impl_name": "ssl", 00:31:06.125 "recv_buf_size": 4096, 00:31:06.125 "send_buf_size": 4096, 00:31:06.125 "enable_recv_pipe": true, 00:31:06.125 "enable_quickack": false, 00:31:06.125 "enable_placement_id": 0, 00:31:06.125 "enable_zerocopy_send_server": true, 00:31:06.125 "enable_zerocopy_send_client": false, 00:31:06.125 "zerocopy_threshold": 0, 00:31:06.125 "tls_version": 0, 00:31:06.125 "enable_ktls": false 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "sock_impl_set_options", 
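The target that just came up and is listening on 10.0.0.2 port 4420 was not configured over RPC; it was restarted directly from the JSON dumped by save_config earlier in this log, fed in on /dev/fd/62. A minimal sketch of that replay flow, using the paths and PSK file shown in this run (the file name tgt_config.json is a placeholder for illustration):
  # capture the running target's configuration, including the TLS listener and the PSK host entry
  scripts/rpc.py save_config > tgt_config.json
  # restart the target from the saved configuration; -m 0x2 pins it to core 1 as in this run
  build/bin/nvmf_tgt -m 0x2 -c tgt_config.json
Because the saved config already carries nvmf_subsystem_add_host with "psk": "/tmp/tmp.Lx0iZo1Cr9" and a listener with "secure_channel": true, the restarted target listens for TLS connections on 10.0.0.2:4420 with no further RPC calls.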
00:31:06.125 "params": { 00:31:06.125 "impl_name": "posix", 00:31:06.125 "recv_buf_size": 2097152, 00:31:06.125 "send_buf_size": 2097152, 00:31:06.125 "enable_recv_pipe": true, 00:31:06.125 "enable_quickack": false, 00:31:06.125 "enable_placement_id": 0, 00:31:06.125 "enable_zerocopy_send_server": true, 00:31:06.125 "enable_zerocopy_send_client": false, 00:31:06.125 "zerocopy_threshold": 0, 00:31:06.125 "tls_version": 0, 00:31:06.125 "enable_ktls": false 00:31:06.125 } 00:31:06.125 } 00:31:06.125 ] 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "subsystem": "vmd", 00:31:06.125 "config": [] 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "subsystem": "accel", 00:31:06.125 "config": [ 00:31:06.125 { 00:31:06.125 "method": "accel_set_options", 00:31:06.125 "params": { 00:31:06.125 "small_cache_size": 128, 00:31:06.125 "large_cache_size": 16, 00:31:06.125 "task_count": 2048, 00:31:06.125 "sequence_count": 2048, 00:31:06.125 "buf_count": 2048 00:31:06.125 } 00:31:06.125 } 00:31:06.125 ] 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "subsystem": "bdev", 00:31:06.125 "config": [ 00:31:06.125 { 00:31:06.125 "method": "bdev_set_options", 00:31:06.125 "params": { 00:31:06.125 "bdev_io_pool_size": 65535, 00:31:06.125 "bdev_io_cache_size": 256, 00:31:06.125 "bdev_auto_examine": true, 00:31:06.125 "iobuf_small_cache_size": 128, 00:31:06.125 "iobuf_large_cache_size": 16 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "bdev_raid_set_options", 00:31:06.125 "params": { 00:31:06.125 "process_window_size_kb": 1024, 00:31:06.125 "process_max_bandwidth_mb_sec": 0 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "bdev_iscsi_set_options", 00:31:06.125 "params": { 00:31:06.125 "timeout_sec": 30 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "bdev_nvme_set_options", 00:31:06.125 "params": { 00:31:06.125 "action_on_timeout": "none", 00:31:06.125 "timeout_us": 0, 00:31:06.125 "timeout_admin_us": 0, 00:31:06.125 "keep_alive_timeout_ms": 10000, 00:31:06.125 "arbitration_burst": 0, 00:31:06.125 "low_priority_weight": 0, 00:31:06.125 "medium_priority_weight": 0, 00:31:06.125 "high_priority_weight": 0, 00:31:06.125 "nvme_adminq_poll_period_us": 10000, 00:31:06.125 "nvme_ioq_poll_period_us": 0, 00:31:06.125 "io_queue_requests": 512, 00:31:06.125 "delay_cmd_submit": true, 00:31:06.125 "transport_retry_count": 4, 00:31:06.125 "bdev_retry_count": 3, 00:31:06.125 "transport_ack_timeout": 0, 00:31:06.125 "ctrlr_loss_timeout_sec": 0, 00:31:06.125 "reconnect_delay_sec": 0, 00:31:06.125 "fast_io_fail_timeout_sec": 0, 00:31:06.125 "disable_auto_failback": false, 00:31:06.125 "generate_uuids": false, 00:31:06.125 "transport_tos": 0, 00:31:06.125 "nvme_error_stat": false, 00:31:06.125 "rdma_srq_size": 0, 00:31:06.125 "io_path_stat": false, 00:31:06.125 "allow_accel_sequence": false, 00:31:06.125 "rdma_max_cq_size": 0, 00:31:06.125 "rdma_cm_event_timeout_ms": 0, 00:31:06.125 "dhchap_digests": [ 00:31:06.125 "sha256", 00:31:06.125 "sha384", 00:31:06.125 "sha512" 00:31:06.125 ], 00:31:06.125 "dhchap_dhgroups": [ 00:31:06.125 "null", 00:31:06.125 "ffdhe2048", 00:31:06.125 "ffdhe3072", 00:31:06.125 "ffdhe4096", 00:31:06.125 "ffdhe6144", 00:31:06.125 "ffdhe8192" 00:31:06.125 ] 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "bdev_nvme_attach_controller", 00:31:06.125 "params": { 00:31:06.125 "name": "TLSTEST", 00:31:06.125 "trtype": "TCP", 00:31:06.125 "adrfam": "IPv4", 00:31:06.125 "traddr": "10.0.0.2", 00:31:06.125 "trsvcid": "4420", 00:31:06.125 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:06.125 "prchk_reftag": false, 00:31:06.125 "prchk_guard": false, 00:31:06.125 "ctrlr_loss_timeout_sec": 0, 00:31:06.125 "reconnect_delay_sec": 0, 00:31:06.125 "fast_io_fail_timeout_sec": 0, 00:31:06.125 "psk": "/tmp/tmp.Lx0iZo1Cr9", 00:31:06.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.125 "hdgst": false, 00:31:06.125 "ddgst": false 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "bdev_nvme_set_hotplug", 00:31:06.125 "params": { 00:31:06.125 "period_us": 100000, 00:31:06.125 "enable": false 00:31:06.125 } 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "method": "bdev_wait_for_examine" 00:31:06.125 } 00:31:06.125 ] 00:31:06.125 }, 00:31:06.125 { 00:31:06.125 "subsystem": "nbd", 00:31:06.125 "config": [] 00:31:06.125 } 00:31:06.125 ] 00:31:06.125 }' 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944669 ']' 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:06.125 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:06.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:06.126 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:06.126 23:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:06.126 [2024-07-22 23:11:42.303444] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:06.126 [2024-07-22 23:11:42.303616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944669 ] 00:31:06.126 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.126 [2024-07-22 23:11:42.416319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.383 [2024-07-22 23:11:42.526950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.640 [2024-07-22 23:11:42.716794] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:06.640 [2024-07-22 23:11:42.716952] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:07.571 23:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:07.571 23:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:07.571 23:11:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:07.571 Running I/O for 10 seconds... 
00:31:17.674 00:31:17.674 Latency(us) 00:31:17.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.674 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:17.674 Verification LBA range: start 0x0 length 0x2000 00:31:17.674 TLSTESTn1 : 10.03 2554.66 9.98 0.00 0.00 50004.98 10437.21 47768.46 00:31:17.674 =================================================================================================================== 00:31:17.674 Total : 2554.66 9.98 0.00 0.00 50004.98 10437.21 47768.46 00:31:17.674 0 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 944669 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944669 ']' 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944669 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944669 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944669' 00:31:17.674 killing process with pid 944669 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944669 00:31:17.674 Received shutdown signal, test time was about 10.000000 seconds 00:31:17.674 00:31:17.674 Latency(us) 00:31:17.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.674 =================================================================================================================== 00:31:17.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:17.674 [2024-07-22 23:11:53.918962] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:17.674 23:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944669 00:31:17.933 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 944516 00:31:17.933 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944516 ']' 00:31:17.933 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944516 00:31:17.933 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:17.933 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:17.933 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944516 00:31:18.193 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:18.193 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:18.193 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 944516' 00:31:18.193 killing process with pid 944516 00:31:18.193 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944516 00:31:18.193 [2024-07-22 23:11:54.253833] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:18.193 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944516 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=945994 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 945994 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 945994 ']' 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:18.453 23:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:18.453 [2024-07-22 23:11:54.635741] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:18.453 [2024-07-22 23:11:54.635882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.453 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.453 [2024-07-22 23:11:54.754970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.713 [2024-07-22 23:11:54.885478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.713 [2024-07-22 23:11:54.885548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.713 [2024-07-22 23:11:54.885569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.713 [2024-07-22 23:11:54.885604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.713 [2024-07-22 23:11:54.885624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:18.713 [2024-07-22 23:11:54.885671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Lx0iZo1Cr9 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lx0iZo1Cr9 00:31:18.973 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:19.542 [2024-07-22 23:11:55.688404] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.542 23:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:19.800 23:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:20.058 [2024-07-22 23:11:56.290760] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:20.058 [2024-07-22 23:11:56.291171] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.058 23:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:20.316 malloc0 00:31:20.316 23:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:20.884 23:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9 00:31:20.884 [2024-07-22 23:11:57.182479] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=946278 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 946278 /var/tmp/bdevperf.sock 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' 
-z 946278 ']' 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:21.144 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:21.145 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:21.145 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:21.145 [2024-07-22 23:11:57.252832] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:21.145 [2024-07-22 23:11:57.252924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946278 ] 00:31:21.145 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.145 [2024-07-22 23:11:57.329401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.145 [2024-07-22 23:11:57.439884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.403 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.403 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:21.403 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lx0iZo1Cr9 00:31:21.662 23:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:21.922 [2024-07-22 23:11:58.224958] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:22.182 nvme0n1 00:31:22.182 23:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:22.182 Running I/O for 1 seconds... 
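This pass exercises the same TLS path but builds both sides with individual RPCs instead of a replayed config, and the initiator references the PSK through a named keyring entry rather than a raw path on the attach call. A condensed sketch of the calls visible in this log (addresses, NQNs and the PSK file are the values used by this run; the target-side calls go to the default /var/tmp/spdk.sock):
  # target side: TCP transport, subsystem, TLS listener (-k) and the PSK-authenticated host
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lx0iZo1Cr9
  # initiator side (bdevperf RPC socket): register the PSK file as key0, then attach using that key
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lx0iZo1Cr9
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1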
00:31:23.561 00:31:23.561 Latency(us) 00:31:23.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.561 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:23.561 Verification LBA range: start 0x0 length 0x2000 00:31:23.561 nvme0n1 : 1.02 2598.87 10.15 0.00 0.00 48646.45 8543.95 37671.06 00:31:23.561 =================================================================================================================== 00:31:23.561 Total : 2598.87 10.15 0.00 0.00 48646.45 8543.95 37671.06 00:31:23.561 0 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 946278 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946278 ']' 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946278 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946278 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946278' 00:31:23.561 killing process with pid 946278 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946278 00:31:23.561 Received shutdown signal, test time was about 1.000000 seconds 00:31:23.561 00:31:23.561 Latency(us) 00:31:23.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.561 =================================================================================================================== 00:31:23.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946278 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 945994 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 945994 ']' 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 945994 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 945994 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 945994' 00:31:23.561 killing process with pid 945994 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 945994 00:31:23.561 [2024-07-22 23:11:59.867833] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:23.561 23:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 945994 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=946584 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 946584 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 946584 ']' 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.132 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:24.132 [2024-07-22 23:12:00.337251] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:24.132 [2024-07-22 23:12:00.337401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.132 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.392 [2024-07-22 23:12:00.481430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.392 [2024-07-22 23:12:00.638906] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.392 [2024-07-22 23:12:00.639018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.393 [2024-07-22 23:12:00.639055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.393 [2024-07-22 23:12:00.639091] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.393 [2024-07-22 23:12:00.639118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:24.393 [2024-07-22 23:12:00.639192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.652 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:24.652 [2024-07-22 23:12:00.884303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.652 malloc0 00:31:24.652 [2024-07-22 23:12:00.925492] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:24.652 [2024-07-22 23:12:00.940636] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=946749 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 946749 /var/tmp/bdevperf.sock 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 946749 ']' 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:24.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.910 23:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:24.910 [2024-07-22 23:12:01.048701] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:31:24.910 [2024-07-22 23:12:01.048859] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946749 ] 00:31:24.910 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.910 [2024-07-22 23:12:01.145004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.169 [2024-07-22 23:12:01.252465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.109 23:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:26.109 23:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:26.109 23:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lx0iZo1Cr9 00:31:26.368 23:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:26.936 [2024-07-22 23:12:02.940139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:26.936 nvme0n1 00:31:26.936 23:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:27.197 Running I/O for 1 seconds... 00:31:28.137 00:31:28.137 Latency(us) 00:31:28.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:28.137 Verification LBA range: start 0x0 length 0x2000 00:31:28.137 nvme0n1 : 1.03 2512.02 9.81 0.00 0.00 50260.55 7912.87 61361.11 00:31:28.137 =================================================================================================================== 00:31:28.137 Total : 2512.02 9.81 0.00 0.00 50260.55 7912.87 61361.11 00:31:28.137 0 00:31:28.137 23:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:31:28.137 23:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.137 23:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:28.398 23:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.398 23:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:31:28.398 "subsystems": [ 00:31:28.398 { 00:31:28.398 "subsystem": "keyring", 00:31:28.398 "config": [ 00:31:28.398 { 00:31:28.398 "method": "keyring_file_add_key", 00:31:28.398 "params": { 00:31:28.398 "name": "key0", 00:31:28.398 "path": "/tmp/tmp.Lx0iZo1Cr9" 00:31:28.398 } 00:31:28.398 } 00:31:28.398 ] 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "subsystem": "iobuf", 00:31:28.398 "config": [ 00:31:28.398 { 00:31:28.398 "method": "iobuf_set_options", 00:31:28.398 "params": { 00:31:28.398 "small_pool_count": 8192, 00:31:28.398 "large_pool_count": 1024, 00:31:28.398 "small_bufsize": 8192, 00:31:28.398 "large_bufsize": 135168 00:31:28.398 } 00:31:28.398 } 00:31:28.398 ] 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 
"subsystem": "sock", 00:31:28.398 "config": [ 00:31:28.398 { 00:31:28.398 "method": "sock_set_default_impl", 00:31:28.398 "params": { 00:31:28.398 "impl_name": "posix" 00:31:28.398 } 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "method": "sock_impl_set_options", 00:31:28.398 "params": { 00:31:28.398 "impl_name": "ssl", 00:31:28.398 "recv_buf_size": 4096, 00:31:28.398 "send_buf_size": 4096, 00:31:28.398 "enable_recv_pipe": true, 00:31:28.398 "enable_quickack": false, 00:31:28.398 "enable_placement_id": 0, 00:31:28.398 "enable_zerocopy_send_server": true, 00:31:28.398 "enable_zerocopy_send_client": false, 00:31:28.398 "zerocopy_threshold": 0, 00:31:28.398 "tls_version": 0, 00:31:28.398 "enable_ktls": false 00:31:28.398 } 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "method": "sock_impl_set_options", 00:31:28.398 "params": { 00:31:28.398 "impl_name": "posix", 00:31:28.398 "recv_buf_size": 2097152, 00:31:28.398 "send_buf_size": 2097152, 00:31:28.398 "enable_recv_pipe": true, 00:31:28.398 "enable_quickack": false, 00:31:28.398 "enable_placement_id": 0, 00:31:28.398 "enable_zerocopy_send_server": true, 00:31:28.398 "enable_zerocopy_send_client": false, 00:31:28.398 "zerocopy_threshold": 0, 00:31:28.398 "tls_version": 0, 00:31:28.398 "enable_ktls": false 00:31:28.398 } 00:31:28.398 } 00:31:28.398 ] 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "subsystem": "vmd", 00:31:28.398 "config": [] 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "subsystem": "accel", 00:31:28.398 "config": [ 00:31:28.398 { 00:31:28.398 "method": "accel_set_options", 00:31:28.398 "params": { 00:31:28.398 "small_cache_size": 128, 00:31:28.398 "large_cache_size": 16, 00:31:28.398 "task_count": 2048, 00:31:28.398 "sequence_count": 2048, 00:31:28.398 "buf_count": 2048 00:31:28.398 } 00:31:28.398 } 00:31:28.398 ] 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "subsystem": "bdev", 00:31:28.398 "config": [ 00:31:28.398 { 00:31:28.398 "method": "bdev_set_options", 00:31:28.398 "params": { 00:31:28.398 "bdev_io_pool_size": 65535, 00:31:28.398 "bdev_io_cache_size": 256, 00:31:28.398 "bdev_auto_examine": true, 00:31:28.398 "iobuf_small_cache_size": 128, 00:31:28.398 "iobuf_large_cache_size": 16 00:31:28.398 } 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "method": "bdev_raid_set_options", 00:31:28.398 "params": { 00:31:28.398 "process_window_size_kb": 1024, 00:31:28.398 "process_max_bandwidth_mb_sec": 0 00:31:28.398 } 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "method": "bdev_iscsi_set_options", 00:31:28.398 "params": { 00:31:28.398 "timeout_sec": 30 00:31:28.398 } 00:31:28.398 }, 00:31:28.398 { 00:31:28.398 "method": "bdev_nvme_set_options", 00:31:28.398 "params": { 00:31:28.398 "action_on_timeout": "none", 00:31:28.398 "timeout_us": 0, 00:31:28.398 "timeout_admin_us": 0, 00:31:28.398 "keep_alive_timeout_ms": 10000, 00:31:28.398 "arbitration_burst": 0, 00:31:28.398 "low_priority_weight": 0, 00:31:28.398 "medium_priority_weight": 0, 00:31:28.398 "high_priority_weight": 0, 00:31:28.398 "nvme_adminq_poll_period_us": 10000, 00:31:28.398 "nvme_ioq_poll_period_us": 0, 00:31:28.398 "io_queue_requests": 0, 00:31:28.398 "delay_cmd_submit": true, 00:31:28.398 "transport_retry_count": 4, 00:31:28.398 "bdev_retry_count": 3, 00:31:28.398 "transport_ack_timeout": 0, 00:31:28.398 "ctrlr_loss_timeout_sec": 0, 00:31:28.398 "reconnect_delay_sec": 0, 00:31:28.398 "fast_io_fail_timeout_sec": 0, 00:31:28.398 "disable_auto_failback": false, 00:31:28.398 "generate_uuids": false, 00:31:28.398 "transport_tos": 0, 00:31:28.398 "nvme_error_stat": false, 00:31:28.398 
"rdma_srq_size": 0, 00:31:28.399 "io_path_stat": false, 00:31:28.399 "allow_accel_sequence": false, 00:31:28.399 "rdma_max_cq_size": 0, 00:31:28.399 "rdma_cm_event_timeout_ms": 0, 00:31:28.399 "dhchap_digests": [ 00:31:28.399 "sha256", 00:31:28.399 "sha384", 00:31:28.399 "sha512" 00:31:28.399 ], 00:31:28.399 "dhchap_dhgroups": [ 00:31:28.399 "null", 00:31:28.399 "ffdhe2048", 00:31:28.399 "ffdhe3072", 00:31:28.399 "ffdhe4096", 00:31:28.399 "ffdhe6144", 00:31:28.399 "ffdhe8192" 00:31:28.399 ] 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "bdev_nvme_set_hotplug", 00:31:28.399 "params": { 00:31:28.399 "period_us": 100000, 00:31:28.399 "enable": false 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "bdev_malloc_create", 00:31:28.399 "params": { 00:31:28.399 "name": "malloc0", 00:31:28.399 "num_blocks": 8192, 00:31:28.399 "block_size": 4096, 00:31:28.399 "physical_block_size": 4096, 00:31:28.399 "uuid": "914fe4ea-7003-452d-a5a7-01935d09b63e", 00:31:28.399 "optimal_io_boundary": 0, 00:31:28.399 "md_size": 0, 00:31:28.399 "dif_type": 0, 00:31:28.399 "dif_is_head_of_md": false, 00:31:28.399 "dif_pi_format": 0 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "bdev_wait_for_examine" 00:31:28.399 } 00:31:28.399 ] 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "subsystem": "nbd", 00:31:28.399 "config": [] 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "subsystem": "scheduler", 00:31:28.399 "config": [ 00:31:28.399 { 00:31:28.399 "method": "framework_set_scheduler", 00:31:28.399 "params": { 00:31:28.399 "name": "static" 00:31:28.399 } 00:31:28.399 } 00:31:28.399 ] 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "subsystem": "nvmf", 00:31:28.399 "config": [ 00:31:28.399 { 00:31:28.399 "method": "nvmf_set_config", 00:31:28.399 "params": { 00:31:28.399 "discovery_filter": "match_any", 00:31:28.399 "admin_cmd_passthru": { 00:31:28.399 "identify_ctrlr": false 00:31:28.399 } 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_set_max_subsystems", 00:31:28.399 "params": { 00:31:28.399 "max_subsystems": 1024 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_set_crdt", 00:31:28.399 "params": { 00:31:28.399 "crdt1": 0, 00:31:28.399 "crdt2": 0, 00:31:28.399 "crdt3": 0 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_create_transport", 00:31:28.399 "params": { 00:31:28.399 "trtype": "TCP", 00:31:28.399 "max_queue_depth": 128, 00:31:28.399 "max_io_qpairs_per_ctrlr": 127, 00:31:28.399 "in_capsule_data_size": 4096, 00:31:28.399 "max_io_size": 131072, 00:31:28.399 "io_unit_size": 131072, 00:31:28.399 "max_aq_depth": 128, 00:31:28.399 "num_shared_buffers": 511, 00:31:28.399 "buf_cache_size": 4294967295, 00:31:28.399 "dif_insert_or_strip": false, 00:31:28.399 "zcopy": false, 00:31:28.399 "c2h_success": false, 00:31:28.399 "sock_priority": 0, 00:31:28.399 "abort_timeout_sec": 1, 00:31:28.399 "ack_timeout": 0, 00:31:28.399 "data_wr_pool_size": 0 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_create_subsystem", 00:31:28.399 "params": { 00:31:28.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.399 "allow_any_host": false, 00:31:28.399 "serial_number": "00000000000000000000", 00:31:28.399 "model_number": "SPDK bdev Controller", 00:31:28.399 "max_namespaces": 32, 00:31:28.399 "min_cntlid": 1, 00:31:28.399 "max_cntlid": 65519, 00:31:28.399 "ana_reporting": false 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_subsystem_add_host", 00:31:28.399 
"params": { 00:31:28.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.399 "host": "nqn.2016-06.io.spdk:host1", 00:31:28.399 "psk": "key0" 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_subsystem_add_ns", 00:31:28.399 "params": { 00:31:28.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.399 "namespace": { 00:31:28.399 "nsid": 1, 00:31:28.399 "bdev_name": "malloc0", 00:31:28.399 "nguid": "914FE4EA7003452DA5A701935D09B63E", 00:31:28.399 "uuid": "914fe4ea-7003-452d-a5a7-01935d09b63e", 00:31:28.399 "no_auto_visible": false 00:31:28.399 } 00:31:28.399 } 00:31:28.399 }, 00:31:28.399 { 00:31:28.399 "method": "nvmf_subsystem_add_listener", 00:31:28.399 "params": { 00:31:28.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.399 "listen_address": { 00:31:28.399 "trtype": "TCP", 00:31:28.399 "adrfam": "IPv4", 00:31:28.399 "traddr": "10.0.0.2", 00:31:28.399 "trsvcid": "4420" 00:31:28.399 }, 00:31:28.399 "secure_channel": false, 00:31:28.399 "sock_impl": "ssl" 00:31:28.399 } 00:31:28.399 } 00:31:28.399 ] 00:31:28.399 } 00:31:28.399 ] 00:31:28.399 }' 00:31:28.399 23:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:31:28.969 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:31:28.969 "subsystems": [ 00:31:28.969 { 00:31:28.969 "subsystem": "keyring", 00:31:28.970 "config": [ 00:31:28.970 { 00:31:28.970 "method": "keyring_file_add_key", 00:31:28.970 "params": { 00:31:28.970 "name": "key0", 00:31:28.970 "path": "/tmp/tmp.Lx0iZo1Cr9" 00:31:28.970 } 00:31:28.970 } 00:31:28.970 ] 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "subsystem": "iobuf", 00:31:28.970 "config": [ 00:31:28.970 { 00:31:28.970 "method": "iobuf_set_options", 00:31:28.970 "params": { 00:31:28.970 "small_pool_count": 8192, 00:31:28.970 "large_pool_count": 1024, 00:31:28.970 "small_bufsize": 8192, 00:31:28.970 "large_bufsize": 135168 00:31:28.970 } 00:31:28.970 } 00:31:28.970 ] 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "subsystem": "sock", 00:31:28.970 "config": [ 00:31:28.970 { 00:31:28.970 "method": "sock_set_default_impl", 00:31:28.970 "params": { 00:31:28.970 "impl_name": "posix" 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "sock_impl_set_options", 00:31:28.970 "params": { 00:31:28.970 "impl_name": "ssl", 00:31:28.970 "recv_buf_size": 4096, 00:31:28.970 "send_buf_size": 4096, 00:31:28.970 "enable_recv_pipe": true, 00:31:28.970 "enable_quickack": false, 00:31:28.970 "enable_placement_id": 0, 00:31:28.970 "enable_zerocopy_send_server": true, 00:31:28.970 "enable_zerocopy_send_client": false, 00:31:28.970 "zerocopy_threshold": 0, 00:31:28.970 "tls_version": 0, 00:31:28.970 "enable_ktls": false 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "sock_impl_set_options", 00:31:28.970 "params": { 00:31:28.970 "impl_name": "posix", 00:31:28.970 "recv_buf_size": 2097152, 00:31:28.970 "send_buf_size": 2097152, 00:31:28.970 "enable_recv_pipe": true, 00:31:28.970 "enable_quickack": false, 00:31:28.970 "enable_placement_id": 0, 00:31:28.970 "enable_zerocopy_send_server": true, 00:31:28.970 "enable_zerocopy_send_client": false, 00:31:28.970 "zerocopy_threshold": 0, 00:31:28.970 "tls_version": 0, 00:31:28.970 "enable_ktls": false 00:31:28.970 } 00:31:28.970 } 00:31:28.970 ] 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "subsystem": "vmd", 00:31:28.970 "config": [] 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "subsystem": 
"accel", 00:31:28.970 "config": [ 00:31:28.970 { 00:31:28.970 "method": "accel_set_options", 00:31:28.970 "params": { 00:31:28.970 "small_cache_size": 128, 00:31:28.970 "large_cache_size": 16, 00:31:28.970 "task_count": 2048, 00:31:28.970 "sequence_count": 2048, 00:31:28.970 "buf_count": 2048 00:31:28.970 } 00:31:28.970 } 00:31:28.970 ] 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "subsystem": "bdev", 00:31:28.970 "config": [ 00:31:28.970 { 00:31:28.970 "method": "bdev_set_options", 00:31:28.970 "params": { 00:31:28.970 "bdev_io_pool_size": 65535, 00:31:28.970 "bdev_io_cache_size": 256, 00:31:28.970 "bdev_auto_examine": true, 00:31:28.970 "iobuf_small_cache_size": 128, 00:31:28.970 "iobuf_large_cache_size": 16 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_raid_set_options", 00:31:28.970 "params": { 00:31:28.970 "process_window_size_kb": 1024, 00:31:28.970 "process_max_bandwidth_mb_sec": 0 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_iscsi_set_options", 00:31:28.970 "params": { 00:31:28.970 "timeout_sec": 30 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_nvme_set_options", 00:31:28.970 "params": { 00:31:28.970 "action_on_timeout": "none", 00:31:28.970 "timeout_us": 0, 00:31:28.970 "timeout_admin_us": 0, 00:31:28.970 "keep_alive_timeout_ms": 10000, 00:31:28.970 "arbitration_burst": 0, 00:31:28.970 "low_priority_weight": 0, 00:31:28.970 "medium_priority_weight": 0, 00:31:28.970 "high_priority_weight": 0, 00:31:28.970 "nvme_adminq_poll_period_us": 10000, 00:31:28.970 "nvme_ioq_poll_period_us": 0, 00:31:28.970 "io_queue_requests": 512, 00:31:28.970 "delay_cmd_submit": true, 00:31:28.970 "transport_retry_count": 4, 00:31:28.970 "bdev_retry_count": 3, 00:31:28.970 "transport_ack_timeout": 0, 00:31:28.970 "ctrlr_loss_timeout_sec": 0, 00:31:28.970 "reconnect_delay_sec": 0, 00:31:28.970 "fast_io_fail_timeout_sec": 0, 00:31:28.970 "disable_auto_failback": false, 00:31:28.970 "generate_uuids": false, 00:31:28.970 "transport_tos": 0, 00:31:28.970 "nvme_error_stat": false, 00:31:28.970 "rdma_srq_size": 0, 00:31:28.970 "io_path_stat": false, 00:31:28.970 "allow_accel_sequence": false, 00:31:28.970 "rdma_max_cq_size": 0, 00:31:28.970 "rdma_cm_event_timeout_ms": 0, 00:31:28.970 "dhchap_digests": [ 00:31:28.970 "sha256", 00:31:28.970 "sha384", 00:31:28.970 "sha512" 00:31:28.970 ], 00:31:28.970 "dhchap_dhgroups": [ 00:31:28.970 "null", 00:31:28.970 "ffdhe2048", 00:31:28.970 "ffdhe3072", 00:31:28.970 "ffdhe4096", 00:31:28.970 "ffdhe6144", 00:31:28.970 "ffdhe8192" 00:31:28.970 ] 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_nvme_attach_controller", 00:31:28.970 "params": { 00:31:28.970 "name": "nvme0", 00:31:28.970 "trtype": "TCP", 00:31:28.970 "adrfam": "IPv4", 00:31:28.970 "traddr": "10.0.0.2", 00:31:28.970 "trsvcid": "4420", 00:31:28.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.970 "prchk_reftag": false, 00:31:28.970 "prchk_guard": false, 00:31:28.970 "ctrlr_loss_timeout_sec": 0, 00:31:28.970 "reconnect_delay_sec": 0, 00:31:28.970 "fast_io_fail_timeout_sec": 0, 00:31:28.970 "psk": "key0", 00:31:28.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.970 "hdgst": false, 00:31:28.970 "ddgst": false 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_nvme_set_hotplug", 00:31:28.970 "params": { 00:31:28.970 "period_us": 100000, 00:31:28.970 "enable": false 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_enable_histogram", 00:31:28.970 
"params": { 00:31:28.970 "name": "nvme0n1", 00:31:28.970 "enable": true 00:31:28.970 } 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "method": "bdev_wait_for_examine" 00:31:28.970 } 00:31:28.970 ] 00:31:28.970 }, 00:31:28.970 { 00:31:28.970 "subsystem": "nbd", 00:31:28.970 "config": [] 00:31:28.970 } 00:31:28.970 ] 00:31:28.970 }' 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 946749 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946749 ']' 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946749 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946749 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946749' 00:31:28.970 killing process with pid 946749 00:31:28.970 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946749 00:31:28.970 Received shutdown signal, test time was about 1.000000 seconds 00:31:28.970 00:31:28.970 Latency(us) 00:31:28.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.970 =================================================================================================================== 00:31:28.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:28.971 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946749 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 946584 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946584 ']' 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946584 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946584 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946584' 00:31:29.231 killing process with pid 946584 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946584 00:31:29.231 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946584 00:31:29.802 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:31:29.802 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:31:29.802 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:31:29.802 "subsystems": [ 00:31:29.802 { 00:31:29.802 "subsystem": "keyring", 00:31:29.802 "config": [ 00:31:29.802 { 00:31:29.802 "method": "keyring_file_add_key", 00:31:29.802 "params": { 00:31:29.802 "name": "key0", 00:31:29.802 "path": "/tmp/tmp.Lx0iZo1Cr9" 00:31:29.802 } 00:31:29.802 } 00:31:29.802 ] 00:31:29.802 }, 00:31:29.802 { 00:31:29.802 "subsystem": "iobuf", 00:31:29.802 "config": [ 00:31:29.802 { 00:31:29.802 "method": "iobuf_set_options", 00:31:29.802 "params": { 00:31:29.802 "small_pool_count": 8192, 00:31:29.802 "large_pool_count": 1024, 00:31:29.802 "small_bufsize": 8192, 00:31:29.802 "large_bufsize": 135168 00:31:29.802 } 00:31:29.802 } 00:31:29.802 ] 00:31:29.802 }, 00:31:29.802 { 00:31:29.802 "subsystem": "sock", 00:31:29.802 "config": [ 00:31:29.802 { 00:31:29.802 "method": "sock_set_default_impl", 00:31:29.802 "params": { 00:31:29.802 "impl_name": "posix" 00:31:29.802 } 00:31:29.802 }, 00:31:29.802 { 00:31:29.802 "method": "sock_impl_set_options", 00:31:29.802 "params": { 00:31:29.802 "impl_name": "ssl", 00:31:29.802 "recv_buf_size": 4096, 00:31:29.802 "send_buf_size": 4096, 00:31:29.802 "enable_recv_pipe": true, 00:31:29.802 "enable_quickack": false, 00:31:29.802 "enable_placement_id": 0, 00:31:29.802 "enable_zerocopy_send_server": true, 00:31:29.803 "enable_zerocopy_send_client": false, 00:31:29.803 "zerocopy_threshold": 0, 00:31:29.803 "tls_version": 0, 00:31:29.803 "enable_ktls": false 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "sock_impl_set_options", 00:31:29.803 "params": { 00:31:29.803 "impl_name": "posix", 00:31:29.803 "recv_buf_size": 2097152, 00:31:29.803 "send_buf_size": 2097152, 00:31:29.803 "enable_recv_pipe": true, 00:31:29.803 "enable_quickack": false, 00:31:29.803 "enable_placement_id": 0, 00:31:29.803 "enable_zerocopy_send_server": true, 00:31:29.803 "enable_zerocopy_send_client": false, 00:31:29.803 "zerocopy_threshold": 0, 00:31:29.803 "tls_version": 0, 00:31:29.803 "enable_ktls": false 00:31:29.803 } 00:31:29.803 } 00:31:29.803 ] 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "subsystem": "vmd", 00:31:29.803 "config": [] 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "subsystem": "accel", 00:31:29.803 "config": [ 00:31:29.803 { 00:31:29.803 "method": "accel_set_options", 00:31:29.803 "params": { 00:31:29.803 "small_cache_size": 128, 00:31:29.803 "large_cache_size": 16, 00:31:29.803 "task_count": 2048, 00:31:29.803 "sequence_count": 2048, 00:31:29.803 "buf_count": 2048 00:31:29.803 } 00:31:29.803 } 00:31:29.803 ] 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "subsystem": "bdev", 00:31:29.803 "config": [ 00:31:29.803 { 00:31:29.803 "method": "bdev_set_options", 00:31:29.803 "params": { 00:31:29.803 "bdev_io_pool_size": 65535, 00:31:29.803 "bdev_io_cache_size": 256, 00:31:29.803 "bdev_auto_examine": true, 00:31:29.803 "iobuf_small_cache_size": 128, 00:31:29.803 "iobuf_large_cache_size": 16 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "bdev_raid_set_options", 00:31:29.803 "params": { 00:31:29.803 "process_window_size_kb": 1024, 00:31:29.803 "process_max_bandwidth_mb_sec": 0 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "bdev_iscsi_set_options", 00:31:29.803 "params": { 00:31:29.803 "timeout_sec": 30 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "bdev_nvme_set_options", 00:31:29.803 "params": { 00:31:29.803 "action_on_timeout": "none", 00:31:29.803 
"timeout_us": 0, 00:31:29.803 "timeout_admin_us": 0, 00:31:29.803 "keep_alive_timeout_ms": 10000, 00:31:29.803 "arbitration_burst": 0, 00:31:29.803 "low_priority_weight": 0, 00:31:29.803 "medium_priority_weight": 0, 00:31:29.803 "high_priority_weight": 0, 00:31:29.803 "nvme_adminq_poll_period_us": 10000, 00:31:29.803 "nvme_ioq_poll_period_us": 0, 00:31:29.803 "io_queue_requests": 0, 00:31:29.803 "delay_cmd_submit": true, 00:31:29.803 "transport_retry_count": 4, 00:31:29.803 "bdev_retry_count": 3, 00:31:29.803 "transport_ack_timeout": 0, 00:31:29.803 "ctrlr_loss_timeout_sec": 0, 00:31:29.803 "reconnect_delay_sec": 0, 00:31:29.803 "fast_io_fail_timeout_sec": 0, 00:31:29.803 "disable_auto_failback": false, 00:31:29.803 "generate_uuids": false, 00:31:29.803 "transport_tos": 0, 00:31:29.803 "nvme_error_stat": false, 00:31:29.803 "rdma_srq_size": 0, 00:31:29.803 "io_path_stat": false, 00:31:29.803 "allow_accel_sequence": false, 00:31:29.803 "rdma_max_cq_size": 0, 00:31:29.803 "rdma_cm_event_timeout_ms": 0, 00:31:29.803 "dhchap_digests": [ 00:31:29.803 "sha256", 00:31:29.803 "sha384", 00:31:29.803 "sha512" 00:31:29.803 ], 00:31:29.803 "dhchap_dhgroups": [ 00:31:29.803 "null", 00:31:29.803 "ffdhe2048", 00:31:29.803 "ffdhe3072", 00:31:29.803 "ffdhe4096", 00:31:29.803 "ffdhe6144", 00:31:29.803 "ffdhe8192" 00:31:29.803 ] 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "bdev_nvme_set_hotplug", 00:31:29.803 "params": { 00:31:29.803 "period_us": 100000, 00:31:29.803 "enable": false 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "bdev_malloc_create", 00:31:29.803 "params": { 00:31:29.803 "name": "malloc0", 00:31:29.803 "num_blocks": 8192, 00:31:29.803 "block_size": 4096, 00:31:29.803 "physical_block_size": 4096, 00:31:29.803 "uuid": "914fe4ea-7003-452d-a5a7-01935d09b63e", 00:31:29.803 "optimal_io_boundary": 0, 00:31:29.803 "md_size": 0, 00:31:29.803 "dif_type": 0, 00:31:29.803 "dif_is_head_of_md": false, 00:31:29.803 "dif_pi_format": 0 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "bdev_wait_for_examine" 00:31:29.803 } 00:31:29.803 ] 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "subsystem": "nbd", 00:31:29.803 "config": [] 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "subsystem": "scheduler", 00:31:29.803 "config": [ 00:31:29.803 { 00:31:29.803 "method": "framework_set_scheduler", 00:31:29.803 "params": { 00:31:29.803 "name": "static" 00:31:29.803 } 00:31:29.803 } 00:31:29.803 ] 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "subsystem": "nvmf", 00:31:29.803 "config": [ 00:31:29.803 { 00:31:29.803 "method": "nvmf_set_config", 00:31:29.803 "params": { 00:31:29.803 "discovery_filter": "match_any", 00:31:29.803 "admin_cmd_passthru": { 00:31:29.803 "identify_ctrlr": false 00:31:29.803 } 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_set_max_subsystems", 00:31:29.803 "params": { 00:31:29.803 "max_subsystems": 1024 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_set_crdt", 00:31:29.803 "params": { 00:31:29.803 "crdt1": 0, 00:31:29.803 "crdt2": 0, 00:31:29.803 "crdt3": 0 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_create_transport", 00:31:29.803 "params": { 00:31:29.803 "trtype": "TCP", 00:31:29.803 "max_queue_depth": 128, 00:31:29.803 "max_io_qpairs_per_ctrlr": 127, 00:31:29.803 "in_capsule_data_size": 4096, 00:31:29.803 "max_io_size": 131072, 00:31:29.803 "io_unit_size": 131072, 00:31:29.803 "max_aq_depth": 128, 00:31:29.803 "num_shared_buffers": 511, 00:31:29.803 
"buf_cache_size": 4294967295, 00:31:29.803 "dif_insert_or_strip": false, 00:31:29.803 "zcopy": false, 00:31:29.803 "c2h_success": false, 00:31:29.803 "sock_priority": 0, 00:31:29.803 "abort_timeout_sec": 1, 00:31:29.803 "ack_timeout": 0, 00:31:29.803 "data_wr_pool_size": 0 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_create_subsystem", 00:31:29.803 "params": { 00:31:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:29.803 "allow_any_host": false, 00:31:29.803 "serial_number": "00000000000000000000", 00:31:29.803 "model_number": "SPDK bdev Controller", 00:31:29.803 "max_namespaces": 32, 00:31:29.803 "min_cntlid": 1, 00:31:29.803 "max_cntlid": 65519, 00:31:29.803 "ana_reporting": false 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_subsystem_add_host", 00:31:29.803 "params": { 00:31:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:29.803 "host": "nqn.2016-06.io.spdk:host1", 00:31:29.803 "psk": "key0" 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_subsystem_add_ns", 00:31:29.803 "params": { 00:31:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:29.803 "namespace": { 00:31:29.803 "nsid": 1, 00:31:29.803 "bdev_name": "malloc0", 00:31:29.803 "nguid": "914FE4EA7003452DA5A701935D09B63E", 00:31:29.803 "uuid": "914fe4ea-7003-452d-a5a7-01935d09b63e", 00:31:29.803 "no_auto_visible": false 00:31:29.803 } 00:31:29.803 } 00:31:29.803 }, 00:31:29.803 { 00:31:29.803 "method": "nvmf_subsystem_add_listener", 00:31:29.803 "params": { 00:31:29.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:29.803 "listen_address": { 00:31:29.803 "trtype": "TCP", 00:31:29.803 "adrfam": "IPv4", 00:31:29.803 "traddr": "10.0.0.2", 00:31:29.803 "trsvcid": "4420" 00:31:29.803 }, 00:31:29.803 "secure_channel": false, 00:31:29.803 "sock_impl": "ssl" 00:31:29.803 } 00:31:29.803 } 00:31:29.803 ] 00:31:29.803 } 00:31:29.803 ] 00:31:29.803 }' 00:31:29.803 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=947363 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 947363 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 947363 ']' 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:29.804 23:12:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:29.804 [2024-07-22 23:12:05.923616] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:31:29.804 [2024-07-22 23:12:05.923813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.804 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.804 [2024-07-22 23:12:06.076958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.063 [2024-07-22 23:12:06.232035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.063 [2024-07-22 23:12:06.232156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.063 [2024-07-22 23:12:06.232194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.063 [2024-07-22 23:12:06.232224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.063 [2024-07-22 23:12:06.232264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.063 [2024-07-22 23:12:06.232445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.322 [2024-07-22 23:12:06.554376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.322 [2024-07-22 23:12:06.596837] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:30.322 [2024-07-22 23:12:06.597305] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=947509 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 947509 /var/tmp/bdevperf.sock 00:31:30.583 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 947509 ']' 00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:30.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
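Because the target is started with -e 0xFFFF, every tracepoint group is enabled, and the notices above spell out the two ways to get at the trace data; the cleanup stage further down takes the second one by tar-ing /dev/shm/nvmf_trace.0 into the output directory. A sketch of both options (it assumes the spdk_trace app built in the same tree):

  # live snapshot of the nvmf app's trace buffer, instance id 0 as in '-i 0'
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline analysis, as the cleanup step does
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0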
00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:31:30.584 "subsystems": [ 00:31:30.584 { 00:31:30.584 "subsystem": "keyring", 00:31:30.584 "config": [ 00:31:30.584 { 00:31:30.584 "method": "keyring_file_add_key", 00:31:30.584 "params": { 00:31:30.584 "name": "key0", 00:31:30.584 "path": "/tmp/tmp.Lx0iZo1Cr9" 00:31:30.584 } 00:31:30.584 } 00:31:30.584 ] 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "subsystem": "iobuf", 00:31:30.584 "config": [ 00:31:30.584 { 00:31:30.584 "method": "iobuf_set_options", 00:31:30.584 "params": { 00:31:30.584 "small_pool_count": 8192, 00:31:30.584 "large_pool_count": 1024, 00:31:30.584 "small_bufsize": 8192, 00:31:30.584 "large_bufsize": 135168 00:31:30.584 } 00:31:30.584 } 00:31:30.584 ] 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "subsystem": "sock", 00:31:30.584 "config": [ 00:31:30.584 { 00:31:30.584 "method": "sock_set_default_impl", 00:31:30.584 "params": { 00:31:30.584 "impl_name": "posix" 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "sock_impl_set_options", 00:31:30.584 "params": { 00:31:30.584 "impl_name": "ssl", 00:31:30.584 "recv_buf_size": 4096, 00:31:30.584 "send_buf_size": 4096, 00:31:30.584 "enable_recv_pipe": true, 00:31:30.584 "enable_quickack": false, 00:31:30.584 "enable_placement_id": 0, 00:31:30.584 "enable_zerocopy_send_server": true, 00:31:30.584 "enable_zerocopy_send_client": false, 00:31:30.584 "zerocopy_threshold": 0, 00:31:30.584 "tls_version": 0, 00:31:30.584 "enable_ktls": false 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "sock_impl_set_options", 00:31:30.584 "params": { 00:31:30.584 "impl_name": "posix", 00:31:30.584 "recv_buf_size": 2097152, 00:31:30.584 "send_buf_size": 2097152, 00:31:30.584 "enable_recv_pipe": true, 00:31:30.584 "enable_quickack": false, 00:31:30.584 "enable_placement_id": 0, 00:31:30.584 "enable_zerocopy_send_server": true, 00:31:30.584 "enable_zerocopy_send_client": false, 00:31:30.584 "zerocopy_threshold": 0, 00:31:30.584 "tls_version": 0, 00:31:30.584 "enable_ktls": false 00:31:30.584 } 00:31:30.584 } 00:31:30.584 ] 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "subsystem": "vmd", 00:31:30.584 "config": [] 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "subsystem": "accel", 00:31:30.584 "config": [ 00:31:30.584 { 00:31:30.584 "method": "accel_set_options", 00:31:30.584 "params": { 00:31:30.584 "small_cache_size": 128, 00:31:30.584 "large_cache_size": 16, 00:31:30.584 "task_count": 2048, 00:31:30.584 "sequence_count": 2048, 00:31:30.584 "buf_count": 2048 00:31:30.584 } 00:31:30.584 } 00:31:30.584 ] 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "subsystem": "bdev", 00:31:30.584 "config": [ 00:31:30.584 { 00:31:30.584 "method": "bdev_set_options", 00:31:30.584 "params": { 00:31:30.584 "bdev_io_pool_size": 65535, 00:31:30.584 "bdev_io_cache_size": 256, 00:31:30.584 "bdev_auto_examine": true, 00:31:30.584 "iobuf_small_cache_size": 128, 00:31:30.584 "iobuf_large_cache_size": 16 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_raid_set_options", 00:31:30.584 "params": { 00:31:30.584 "process_window_size_kb": 1024, 00:31:30.584 "process_max_bandwidth_mb_sec": 0 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_iscsi_set_options", 00:31:30.584 "params": { 00:31:30.584 "timeout_sec": 30 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_nvme_set_options", 00:31:30.584 "params": { 00:31:30.584 "action_on_timeout": "none", 00:31:30.584 "timeout_us": 0, 
00:31:30.584 "timeout_admin_us": 0, 00:31:30.584 "keep_alive_timeout_ms": 10000, 00:31:30.584 "arbitration_burst": 0, 00:31:30.584 "low_priority_weight": 0, 00:31:30.584 "medium_priority_weight": 0, 00:31:30.584 "high_priority_weight": 0, 00:31:30.584 "nvme_adminq_poll_period_us": 10000, 00:31:30.584 "nvme_ioq_poll_period_us": 0, 00:31:30.584 "io_queue_requests": 512, 00:31:30.584 "delay_cmd_submit": true, 00:31:30.584 "transport_retry_count": 4, 00:31:30.584 "bdev_retry_count": 3, 00:31:30.584 "transport_ack_timeout": 0, 00:31:30.584 "ctrlr_loss_timeout_sec": 0, 00:31:30.584 "reconnect_delay_sec": 0, 00:31:30.584 "fast_io_fail_timeout_sec": 0, 00:31:30.584 "disable_auto_failback": false, 00:31:30.584 "generate_uuids": false, 00:31:30.584 "transport_tos": 0, 00:31:30.584 "nvme_error_stat": false, 00:31:30.584 "rdma_srq_size": 0, 00:31:30.584 "io_path_stat": false, 00:31:30.584 "allow_accel_sequence": false, 00:31:30.584 "rdma_max_cq_size": 0, 00:31:30.584 "rdma_cm_event_timeout_ms": 0, 00:31:30.584 "dhchap_digests": [ 00:31:30.584 "sha256", 00:31:30.584 "sha384", 00:31:30.584 "sha512" 00:31:30.584 ], 00:31:30.584 "dhchap_dhgroups": [ 00:31:30.584 "null", 00:31:30.584 "ffdhe2048", 00:31:30.584 "ffdhe3072", 00:31:30.584 "ffdhe4096", 00:31:30.584 "ffdhe6144", 00:31:30.584 "ffdhe8192" 00:31:30.584 ] 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_nvme_attach_controller", 00:31:30.584 "params": { 00:31:30.584 "name": "nvme0", 00:31:30.584 "trtype": "TCP", 00:31:30.584 "adrfam": "IPv4", 00:31:30.584 "traddr": "10.0.0.2", 00:31:30.584 "trsvcid": "4420", 00:31:30.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.584 "prchk_reftag": false, 00:31:30.584 "prchk_guard": false, 00:31:30.584 "ctrlr_loss_timeout_sec": 0, 00:31:30.584 "reconnect_delay_sec": 0, 00:31:30.584 "fast_io_fail_timeout_sec": 0, 00:31:30.584 "psk": "key0", 00:31:30.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.584 "hdgst": false, 00:31:30.584 "ddgst": false 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_nvme_set_hotplug", 00:31:30.584 "params": { 00:31:30.584 "period_us": 100000, 00:31:30.584 "enable": false 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_enable_histogram", 00:31:30.584 "params": { 00:31:30.584 "name": "nvme0n1", 00:31:30.584 "enable": true 00:31:30.584 } 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "method": "bdev_wait_for_examine" 00:31:30.584 } 00:31:30.584 ] 00:31:30.584 }, 00:31:30.584 { 00:31:30.584 "subsystem": "nbd", 00:31:30.584 "config": [] 00:31:30.584 } 00:31:30.584 ] 00:31:30.584 }' 00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:30.584 23:12:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:30.584 [2024-07-22 23:12:06.721921] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:31:30.584 [2024-07-22 23:12:06.722021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947509 ] 00:31:30.584 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.584 [2024-07-22 23:12:06.799264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.845 [2024-07-22 23:12:06.909991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.845 [2024-07-22 23:12:07.107881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:31.105 23:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:31.105 23:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:31.105 23:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:31.105 23:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:31:31.365 23:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.365 23:12:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:31.626 Running I/O for 1 seconds... 00:31:32.590 00:31:32.590 Latency(us) 00:31:32.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.590 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:32.590 Verification LBA range: start 0x0 length 0x2000 00:31:32.590 nvme0n1 : 1.03 2585.89 10.10 0.00 0.00 48882.80 8883.77 42331.40 00:31:32.590 =================================================================================================================== 00:31:32.590 Total : 2585.89 10.10 0.00 0.00 48882.80 8883.77 42331.40 00:31:32.590 0 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:31:32.590 23:12:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:32.590 nvmf_trace.0 00:31:32.850 23:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 947509 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 947509 ']' 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 947509 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 947509 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 947509' 00:31:32.850 killing process with pid 947509 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 947509 00:31:32.850 Received shutdown signal, test time was about 1.000000 seconds 00:31:32.850 00:31:32.850 Latency(us) 00:31:32.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.850 =================================================================================================================== 00:31:32.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.850 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 947509 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:33.110 rmmod nvme_tcp 00:31:33.110 rmmod nvme_fabrics 00:31:33.110 rmmod nvme_keyring 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 947363 ']' 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 947363 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 947363 ']' 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 947363 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:33.110 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:33.110 23:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 947363 00:31:33.370 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:33.370 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:33.370 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 947363' 00:31:33.370 killing process with pid 947363 00:31:33.370 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 947363 00:31:33.370 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 947363 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.630 23:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hpaENrm6Ai /tmp/tmp.D3euJPNQAu /tmp/tmp.Lx0iZo1Cr9 00:31:36.173 00:31:36.173 real 1m38.818s 00:31:36.173 user 2m47.162s 00:31:36.173 sys 0m32.079s 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:36.173 ************************************ 00:31:36.173 END TEST nvmf_tls 00:31:36.173 ************************************ 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:36.173 ************************************ 00:31:36.173 START TEST nvmf_fips 00:31:36.173 ************************************ 00:31:36.173 23:12:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:31:36.173 * Looking for test storage... 
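The fips.sh prologue traced below gates the whole test on the OpenSSL version: it takes the second field of 'openssl version' (3.0.9 on this runner) and requires it to be at least 3.0.0 via the cmp_versions helper. A standalone equivalent of that gate, as a sketch rather than the helper's actual implementation:

  v=$(openssl version | awk '{print $2}')
  # version-sort 3.0.0 against the detected version; if 3.0.0 sorts first (or ties), we are >= 3.0.0
  if [ "$(printf '%s\n' 3.0.0 "$v" | sort -V | head -n1)" = 3.0.0 ]; then
      echo "OpenSSL $v satisfies the >= 3.0.0 requirement"
  fi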
00:31:36.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.173 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.174 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:31:36.175 Error setting digest 00:31:36.175 00A2F5ED987F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:31:36.175 00A2F5ED987F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:31:36.175 23:12:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:39.470 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 
00:31:39.470 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:39.470 Found net devices under 0000:84:00.0: cvl_0_0 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:39.470 Found net devices under 0000:84:00.1: cvl_0_1 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.470 
23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.470 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:39.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:31:39.471 00:31:39.471 --- 10.0.0.2 ping statistics --- 00:31:39.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.471 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:31:39.471 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:31:39.731 00:31:39.731 --- 10.0.0.1 ping statistics --- 00:31:39.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.731 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=950397 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 950397 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 950397 ']' 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.731 23:12:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:39.731 [2024-07-22 23:12:16.020052] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
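The fips.sh preamble traced above performs four checks before any NVMe/TCP traffic is sent: OpenSSL must be at least 3.0.0, fips.so must exist under the modules directory reported by openssl info, both the base and fips providers must be listed once OPENSSL_CONF points at the generated spdk_fips.conf, and a non-approved digest must be rejected. A stand-alone sketch of that verification, assuming an OpenSSL 3.x install (the script name check_fips.sh is hypothetical, not part of the test suite):

  #!/usr/bin/env bash
  # check_fips.sh - hypothetical condensation of the provider/digest checks in fips.sh
  # assumes a FIPS-enabling OpenSSL config is already in effect (e.g. OPENSSL_CONF=spdk_fips.conf)
  set -euo pipefail

  # the FIPS module must be installed under the OpenSSL modules directory
  modulesdir=$(openssl info -modulesdir)
  [[ -f "$modulesdir/fips.so" ]] || { echo "no fips.so under $modulesdir" >&2; exit 1; }

  # exactly two providers (base + fips) should be active
  mapfile -t providers < <(openssl list -providers | grep name)
  (( ${#providers[@]} == 2 )) || { echo "expected 2 providers, got ${#providers[@]}" >&2; exit 1; }

  # MD5 is not FIPS-approved, so it must fail when enforcement is on
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "openssl md5 succeeded, FIPS mode is not enforced" >&2; exit 1
  fi
  echo "FIPS checks passed"

In the run above the MD5 attempt indeed fails with "Error setting digest ... unsupported", which is the outcome the test requires before it proceeds.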
00:31:39.731 [2024-07-22 23:12:16.020218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.992 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.992 [2024-07-22 23:12:16.141948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.992 [2024-07-22 23:12:16.255653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.992 [2024-07-22 23:12:16.255720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.992 [2024-07-22 23:12:16.255740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.992 [2024-07-22 23:12:16.255756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.992 [2024-07-22 23:12:16.255770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.992 [2024-07-22 23:12:16.255814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:40.252 23:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.822 [2024-07-22 23:12:17.088933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.822 [2024-07-22 23:12:17.104876] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:40.822 [2024-07-22 23:12:17.105144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.082 
[2024-07-22 23:12:17.139350] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:41.082 malloc0 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=950548 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 950548 /var/tmp/bdevperf.sock 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 950548 ']' 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:41.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:41.082 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:41.082 [2024-07-22 23:12:17.319640] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:31:41.082 [2024-07-22 23:12:17.319745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950548 ] 00:31:41.082 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.341 [2024-07-22 23:12:17.429715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.341 [2024-07-22 23:12:17.541030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:41.600 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:41.601 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:31:41.601 23:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:42.167 [2024-07-22 23:12:18.206298] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:42.167 [2024-07-22 23:12:18.206461] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:42.167 TLSTESTn1 00:31:42.167 23:12:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:42.167 Running I/O for 10 seconds... 
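The TLS leg just traced reduces to three RPC-driven steps, with the I/O results following in the table below: start bdevperf against its own RPC socket, attach an NVMe/TCP controller using the PSK file written earlier, then trigger the workload through bdevperf.py. A condensed sketch using the same flags as the log ($SPDK is a shorthand placeholder for the long workspace path; the target listener and key.txt are assumed to be in place already, as set up above):

  # condensed from the fips.sh flow traced above; $SPDK stands in for the repo root
  SPDK=/path/to/spdk
  sock=/var/tmp/bdevperf.sock

  # -z makes bdevperf wait for RPCs; the real script also waits for the socket before issuing them
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

  "$SPDK/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$SPDK/test/nvmf/fips/key.txt"

  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests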
00:31:54.384 00:31:54.384 Latency(us) 00:31:54.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:54.384 Verification LBA range: start 0x0 length 0x2000 00:31:54.384 TLSTESTn1 : 10.03 2577.86 10.07 0.00 0.00 49551.57 9854.67 38253.61 00:31:54.384 =================================================================================================================== 00:31:54.384 Total : 2577.86 10.07 0.00 0.00 49551.57 9854.67 38253.61 00:31:54.384 0 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:54.384 nvmf_trace.0 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 950548 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 950548 ']' 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 950548 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950548 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950548' 00:31:54.384 killing process with pid 950548 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 950548 00:31:54.384 Received shutdown signal, test time was about 10.000000 seconds 00:31:54.384 00:31:54.384 Latency(us) 00:31:54.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.384 =================================================================================================================== 00:31:54.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.384 [2024-07-22 
23:12:28.710834] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 950548 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:54.384 23:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:54.384 rmmod nvme_tcp 00:31:54.384 rmmod nvme_fabrics 00:31:54.384 rmmod nvme_keyring 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 950397 ']' 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 950397 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 950397 ']' 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 950397 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950397 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950397' 00:31:54.384 killing process with pid 950397 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 950397 00:31:54.384 [2024-07-22 23:12:29.088110] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 950397 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:54.384 23:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.384 23:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:55.325 00:31:55.325 real 0m19.511s 00:31:55.325 user 0m24.000s 00:31:55.325 sys 0m7.859s 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:55.325 ************************************ 00:31:55.325 END TEST nvmf_fips 00:31:55.325 ************************************ 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:55.325 ************************************ 00:31:55.325 START TEST nvmf_fuzz 00:31:55.325 ************************************ 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:31:55.325 * Looking for test storage... 
00:31:55.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.325 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
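The nvmf/common.sh lines above show how every test script builds its host identity: nvme gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID tail doubles as the host ID, and the two are packed into an array that is later expanded onto nvme connect. A minimal sketch of that pattern (the suffix-stripping is an assumption about how the ID is derived, chosen because it reproduces the values shown in the trace):

  # mirrors the NVME_HOST* setup traced from nvmf/common.sh
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumption: host ID = UUID suffix of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # later consumed roughly as (illustrative values taken from this log):
  #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn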
00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.586 23:12:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:31:58.880 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:58.881 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:58.881 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.881 23:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:58.881 Found net devices under 0000:84:00.0: cvl_0_0 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:58.881 Found net devices under 0000:84:00.1: cvl_0_1 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.881 
23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:58.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:31:58.881 00:31:58.881 --- 10.0.0.2 ping statistics --- 00:31:58.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.881 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:58.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:31:58.881 00:31:58.881 --- 10.0.0.1 ping statistics --- 00:31:58.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.881 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=953928 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 953928 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 953928 ']' 
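The nvmf_tcp_init trace above moves the target-side port (cvl_0_0) into a private network namespace, leaves the initiator-side port (cvl_0_1) in the root namespace, opens TCP port 4420, and pings in both directions before the target application is started. A condensed sketch of the same wiring, assuming root privileges and the cvl_0_0/cvl_0_1 port names this run uses:

# target port lives in its own namespace, initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> initiator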
00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:58.881 23:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:59.452 Malloc0 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:31:59.452 23:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:32:31.569 Fuzzing completed. Shutting down the fuzz application 00:32:31.569 00:32:31.569 Dumping successful admin opcodes: 00:32:31.569 8, 9, 10, 24, 00:32:31.569 Dumping successful io opcodes: 00:32:31.569 0, 9, 00:32:31.569 NS: 0x200003aeff00 I/O qp, Total commands completed: 357332, total successful commands: 2115, random_seed: 3889450816 00:32:31.569 NS: 0x200003aeff00 admin qp, Total commands completed: 42256, total successful commands: 344, random_seed: 3510725824 00:32:31.569 23:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:32:31.829 Fuzzing completed. Shutting down the fuzz application 00:32:31.829 00:32:31.829 Dumping successful admin opcodes: 00:32:31.829 24, 00:32:31.829 Dumping successful io opcodes: 00:32:31.829 00:32:31.829 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2743975484 00:32:31.829 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2744136492 00:32:31.829 23:13:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.829 23:13:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.829 23:13:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:31.829 rmmod nvme_tcp 00:32:31.829 rmmod nvme_fabrics 00:32:31.829 rmmod nvme_keyring 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 953928 ']' 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
953928 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 953928 ']' 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 953928 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953928 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953928' 00:32:31.829 killing process with pid 953928 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 953928 00:32:31.829 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 953928 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.398 23:13:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.307 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:34.307 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:32:34.307 00:32:34.307 real 0m39.062s 00:32:34.307 user 0m50.986s 00:32:34.307 sys 0m16.218s 00:32:34.307 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:34.307 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:32:34.307 ************************************ 00:32:34.307 END TEST nvmf_fuzz 00:32:34.307 ************************************ 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 
-- # set +x 00:32:34.567 ************************************ 00:32:34.567 START TEST nvmf_multiconnection 00:32:34.567 ************************************ 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:32:34.567 * Looking for test storage... 00:32:34.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.567 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:32:34.568 23:13:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:32:37.863 23:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:37.863 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:37.863 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:37.863 Found net devices under 0000:84:00.0: cvl_0_0 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:37.863 Found net devices 
under 0000:84:00.1: cvl_0_1 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:37.863 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.864 23:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:37.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:37.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:32:37.864 00:32:37.864 --- 10.0.0.2 ping statistics --- 00:32:37.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.864 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:32:37.864 00:32:37.864 --- 10.0.0.1 ping statistics --- 00:32:37.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.864 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=959535 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 959535 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 959535 ']' 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
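As in the fuzz test earlier, the multiconnection test launches nvmf_tgt inside the target namespace (here with four cores, -m 0xF) and waits until it answers on its RPC socket before issuing any RPCs. A minimal sketch of that start-and-wait step, assuming $SPDK_DIR points at an SPDK checkout; the polling loop below stands in for the suite's own waitforlisten helper and is not the exact implementation:

ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the app answers on its default RPC socket, bailing out if it dies first
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done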
00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:37.864 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.124 [2024-07-22 23:13:14.250630] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:32:38.124 [2024-07-22 23:13:14.250799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.124 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.124 [2024-07-22 23:13:14.404644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:38.383 [2024-07-22 23:13:14.559465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.383 [2024-07-22 23:13:14.559531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.383 [2024-07-22 23:13:14.559551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.383 [2024-07-22 23:13:14.559566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.383 [2024-07-22 23:13:14.559580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.383 [2024-07-22 23:13:14.559751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.383 [2024-07-22 23:13:14.559816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.383 [2024-07-22 23:13:14.559900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.383 [2024-07-22 23:13:14.559906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.643 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 [2024-07-22 23:13:14.747606] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
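The loop that follows (target/multiconnection.sh@21-25) builds NVMF_SUBSYS=11 identical malloc-backed subsystems on top of the TCP transport just created. A condensed sketch of the same RPC sequence, assuming the default /var/tmp/spdk.sock socket that the rpc_cmd wrapper in this trace talks to ($SPDK_DIR is an assumption, not a variable from the suite):

rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"                            # 64 MiB, 512 B blocks
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i" # -a: allow any host
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done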
00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 Malloc1 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 [2024-07-22 23:13:14.818196] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 Malloc2 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 Malloc3 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.644 23:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.644 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.904 Malloc4 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.904 23:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.904 Malloc5 00:32:38.904 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.904 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:32:38.904 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.904 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:32:38.905 23:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 Malloc6 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 Malloc7 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 Malloc8 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 
23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.905 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 Malloc9 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 Malloc10 00:32:39.165 23:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 Malloc11 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:39.165 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:39.736 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:32:39.736 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:39.736 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:39.736 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:39.736 23:13:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:42.274 23:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:42.274 23:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:42.274 23:13:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:32:42.274 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:42.274 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:42.274 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:42.274 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:42.274 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:32:42.534 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:32:42.534 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:42.534 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:42.534 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:42.534 23:13:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:44.443 23:13:20 
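[Editor's note: condensed for readability. The six bdev/subsystem/namespace/listener blocks traced above (multiconnection.sh lines 21-25, here covering Malloc6 through Malloc11) all repeat the same four-step RPC sequence. The sketch below is illustrative only: it issues the calls through scripts/rpc.py rather than the harness's rpc_cmd wrapper, and it assumes the nvmf target and its TCP transport were already set up earlier in the run, as they were here.]
    # Sketch of the per-subsystem setup the trace above is executing.
    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        # 64 MB malloc bdev with 512-byte blocks ("bdev_malloc_create 64 512 -b MallocN")
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        # allow-any-host subsystem whose serial (SPDKn) the host later waits for
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done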
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:44.443 23:13:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:32:45.380 23:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:32:45.380 23:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:45.380 23:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:45.380 23:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:45.380 23:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:47.308 23:13:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:32:47.889 23:13:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:32:47.889 23:13:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:47.889 23:13:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:47.889 23:13:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:32:47.889 23:13:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:50.430 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:50.431 23:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:32:51.000 23:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:32:51.000 23:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:51.000 23:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:51.000 23:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:51.000 23:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:52.912 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:32:53.851 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:32:53.851 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:53.851 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:32:53.851 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:53.851 23:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:55.760 23:13:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:32:56.700 23:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:32:56.700 23:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:32:56.700 23:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:56.700 23:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:56.700 23:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:58.611 23:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:32:59.553 23:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:32:59.553 23:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:32:59.553 23:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:59.553 23:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:59.553 23:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:33:01.456 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:01.457 23:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:33:02.392 23:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:33:02.392 23:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:33:02.392 23:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:02.392 23:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:02.392 23:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:04.298 23:13:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:33:05.238 23:13:41 
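[Editor's note: the host-side pattern repeated above for cnode1 onward, and continued below for the remaining subsystems, is an nvme connect followed by the harness's waitforserial poll over lsblk. A simplified sketch, assuming the hostnqn/hostid and listener address printed in the trace; the harness's own helper additionally tracks an expected device count rather than a plain >=1 test.]
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches the subsystem serial (SPDK1..SPDK11)
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }
    for i in $(seq 1 11); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
            --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done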
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:33:05.238 23:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:33:05.238 23:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:05.238 23:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:05.238 23:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:07.144 23:13:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:33:08.082 23:13:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:33:08.082 23:13:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:33:08.082 23:13:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:08.082 23:13:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:08.082 23:13:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:33:10.011 23:13:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:33:10.012 [global] 00:33:10.012 thread=1 00:33:10.012 invalidate=1 00:33:10.012 rw=read 
00:33:10.012 time_based=1 00:33:10.012 runtime=10 00:33:10.012 ioengine=libaio 00:33:10.012 direct=1 00:33:10.012 bs=262144 00:33:10.012 iodepth=64 00:33:10.012 norandommap=1 00:33:10.012 numjobs=1 00:33:10.012 00:33:10.012 [job0] 00:33:10.012 filename=/dev/nvme0n1 00:33:10.012 [job1] 00:33:10.012 filename=/dev/nvme10n1 00:33:10.012 [job2] 00:33:10.012 filename=/dev/nvme1n1 00:33:10.012 [job3] 00:33:10.012 filename=/dev/nvme2n1 00:33:10.012 [job4] 00:33:10.012 filename=/dev/nvme3n1 00:33:10.012 [job5] 00:33:10.012 filename=/dev/nvme4n1 00:33:10.012 [job6] 00:33:10.012 filename=/dev/nvme5n1 00:33:10.012 [job7] 00:33:10.012 filename=/dev/nvme6n1 00:33:10.012 [job8] 00:33:10.012 filename=/dev/nvme7n1 00:33:10.012 [job9] 00:33:10.012 filename=/dev/nvme8n1 00:33:10.012 [job10] 00:33:10.012 filename=/dev/nvme9n1 00:33:10.270 Could not set queue depth (nvme0n1) 00:33:10.270 Could not set queue depth (nvme10n1) 00:33:10.270 Could not set queue depth (nvme1n1) 00:33:10.270 Could not set queue depth (nvme2n1) 00:33:10.270 Could not set queue depth (nvme3n1) 00:33:10.270 Could not set queue depth (nvme4n1) 00:33:10.270 Could not set queue depth (nvme5n1) 00:33:10.270 Could not set queue depth (nvme6n1) 00:33:10.270 Could not set queue depth (nvme7n1) 00:33:10.270 Could not set queue depth (nvme8n1) 00:33:10.270 Could not set queue depth (nvme9n1) 00:33:10.270 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:10.270 fio-3.35 00:33:10.270 Starting 11 threads 00:33:22.506 00:33:22.506 job0: (groupid=0, jobs=1): err= 0: pid=963625: Mon Jul 22 23:13:57 2024 00:33:22.506 read: IOPS=526, BW=132MiB/s (138MB/s)(1325MiB/10066msec) 00:33:22.506 slat (usec): min=11, max=168233, avg=991.07, stdev=5864.28 00:33:22.507 clat (usec): min=977, max=314732, avg=120376.99, stdev=64598.10 00:33:22.507 lat (usec): min=1035, max=324282, avg=121368.06, stdev=65268.87 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 23], 20.00th=[ 62], 00:33:22.507 | 30.00th=[ 90], 40.00th=[ 110], 50.00th=[ 124], 60.00th=[ 136], 00:33:22.507 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 197], 95.00th=[ 232], 00:33:22.507 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 313], 00:33:22.507 | 99.99th=[ 317] 00:33:22.507 bw ( 
KiB/s): min=86016, max=277504, per=9.09%, avg=134092.80, stdev=45487.51, samples=20 00:33:22.507 iops : min= 336, max= 1084, avg=523.80, stdev=177.69, samples=20 00:33:22.507 lat (usec) : 1000=0.02% 00:33:22.507 lat (msec) : 2=0.40%, 4=0.57%, 10=4.07%, 20=3.89%, 50=8.21% 00:33:22.507 lat (msec) : 100=18.05%, 250=61.40%, 500=3.40% 00:33:22.507 cpu : usr=0.38%, sys=2.20%, ctx=1087, majf=0, minf=3721 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=5301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job1: (groupid=0, jobs=1): err= 0: pid=963626: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=546, BW=137MiB/s (143MB/s)(1373MiB/10043msec) 00:33:22.507 slat (usec): min=11, max=141161, avg=1053.06, stdev=6035.71 00:33:22.507 clat (usec): min=1130, max=333829, avg=115848.61, stdev=73166.43 00:33:22.507 lat (usec): min=1163, max=354298, avg=116901.66, stdev=73991.37 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 18], 20.00th=[ 46], 00:33:22.507 | 30.00th=[ 69], 40.00th=[ 85], 50.00th=[ 104], 60.00th=[ 136], 00:33:22.507 | 70.00th=[ 161], 80.00th=[ 186], 90.00th=[ 222], 95.00th=[ 243], 00:33:22.507 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 296], 99.95th=[ 326], 00:33:22.507 | 99.99th=[ 334] 00:33:22.507 bw ( KiB/s): min=67072, max=235520, per=9.42%, avg=138984.50, stdev=53498.58, samples=20 00:33:22.507 iops : min= 262, max= 920, avg=542.85, stdev=208.94, samples=20 00:33:22.507 lat (msec) : 2=0.20%, 4=0.84%, 10=4.84%, 20=4.99%, 50=10.74% 00:33:22.507 lat (msec) : 100=26.42%, 250=48.93%, 500=3.04% 00:33:22.507 cpu : usr=0.34%, sys=2.40%, ctx=1073, majf=0, minf=4097 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=5492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job2: (groupid=0, jobs=1): err= 0: pid=963629: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=535, BW=134MiB/s (140MB/s)(1353MiB/10107msec) 00:33:22.507 slat (usec): min=12, max=111813, avg=1268.73, stdev=5847.61 00:33:22.507 clat (usec): min=1752, max=358653, avg=118083.71, stdev=65915.67 00:33:22.507 lat (usec): min=1776, max=358686, avg=119352.43, stdev=66703.03 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 49], 00:33:22.507 | 30.00th=[ 77], 40.00th=[ 103], 50.00th=[ 124], 60.00th=[ 142], 00:33:22.507 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 203], 95.00th=[ 220], 00:33:22.507 | 99.00th=[ 262], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 300], 00:33:22.507 | 99.99th=[ 359] 00:33:22.507 bw ( KiB/s): min=84992, max=342528, per=9.28%, avg=136934.40, stdev=59872.43, samples=20 00:33:22.507 iops : min= 332, max= 1338, avg=534.90, stdev=233.88, samples=20 00:33:22.507 lat (msec) : 2=0.04%, 4=0.09%, 10=4.16%, 20=5.19%, 50=10.92% 00:33:22.507 lat (msec) : 100=18.53%, 250=59.08%, 500=2.00% 00:33:22.507 cpu : usr=0.31%, sys=2.43%, ctx=986, majf=0, minf=4097 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 
32=0.6%, >=64=98.8% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=5413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job3: (groupid=0, jobs=1): err= 0: pid=963630: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=563, BW=141MiB/s (148MB/s)(1425MiB/10106msec) 00:33:22.507 slat (usec): min=10, max=162483, avg=978.15, stdev=5659.00 00:33:22.507 clat (usec): min=887, max=406584, avg=112377.60, stdev=68701.15 00:33:22.507 lat (usec): min=913, max=406601, avg=113355.75, stdev=69311.68 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 48], 00:33:22.507 | 30.00th=[ 67], 40.00th=[ 83], 50.00th=[ 105], 60.00th=[ 130], 00:33:22.507 | 70.00th=[ 161], 80.00th=[ 180], 90.00th=[ 209], 95.00th=[ 226], 00:33:22.507 | 99.00th=[ 259], 99.50th=[ 264], 99.90th=[ 279], 99.95th=[ 292], 00:33:22.507 | 99.99th=[ 405] 00:33:22.507 bw ( KiB/s): min=65536, max=228864, per=9.78%, avg=144281.60, stdev=46466.30, samples=20 00:33:22.507 iops : min= 256, max= 894, avg=563.60, stdev=181.51, samples=20 00:33:22.507 lat (usec) : 1000=0.05% 00:33:22.507 lat (msec) : 2=0.25%, 4=1.16%, 10=2.39%, 20=6.21%, 50=11.14% 00:33:22.507 lat (msec) : 100=27.46%, 250=50.03%, 500=1.32% 00:33:22.507 cpu : usr=0.44%, sys=2.22%, ctx=969, majf=0, minf=4097 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=5699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job4: (groupid=0, jobs=1): err= 0: pid=963631: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=462, BW=116MiB/s (121MB/s)(1168MiB/10104msec) 00:33:22.507 slat (usec): min=10, max=106884, avg=1137.92, stdev=5486.17 00:33:22.507 clat (msec): min=2, max=295, avg=137.06, stdev=57.84 00:33:22.507 lat (msec): min=2, max=312, avg=138.20, stdev=58.44 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 18], 5.00th=[ 44], 10.00th=[ 63], 20.00th=[ 85], 00:33:22.507 | 30.00th=[ 104], 40.00th=[ 121], 50.00th=[ 138], 60.00th=[ 155], 00:33:22.507 | 70.00th=[ 171], 80.00th=[ 188], 90.00th=[ 213], 95.00th=[ 236], 00:33:22.507 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 296], 00:33:22.507 | 99.99th=[ 296] 00:33:22.507 bw ( KiB/s): min=66048, max=176128, per=8.00%, avg=118002.75, stdev=25431.66, samples=20 00:33:22.507 iops : min= 258, max= 688, avg=460.90, stdev=99.33, samples=20 00:33:22.507 lat (msec) : 4=0.17%, 10=0.41%, 20=0.77%, 50=5.31%, 100=21.38% 00:33:22.507 lat (msec) : 250=69.38%, 500=2.59% 00:33:22.507 cpu : usr=0.46%, sys=1.87%, ctx=964, majf=0, minf=4097 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=4673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job5: (groupid=0, jobs=1): err= 0: pid=963634: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=488, BW=122MiB/s 
(128MB/s)(1233MiB/10108msec) 00:33:22.507 slat (usec): min=11, max=90805, avg=869.02, stdev=4488.06 00:33:22.507 clat (usec): min=791, max=358371, avg=130038.10, stdev=72568.56 00:33:22.507 lat (usec): min=834, max=358394, avg=130907.11, stdev=73102.91 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 27], 20.00th=[ 49], 00:33:22.507 | 30.00th=[ 84], 40.00th=[ 114], 50.00th=[ 140], 60.00th=[ 163], 00:33:22.507 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 245], 00:33:22.507 | 99.00th=[ 279], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 359], 00:33:22.507 | 99.99th=[ 359] 00:33:22.507 bw ( KiB/s): min=69632, max=217088, per=8.45%, avg=124672.00, stdev=46236.78, samples=20 00:33:22.507 iops : min= 272, max= 848, avg=487.00, stdev=180.61, samples=20 00:33:22.507 lat (usec) : 1000=0.02% 00:33:22.507 lat (msec) : 2=0.47%, 4=0.10%, 10=2.90%, 20=2.82%, 50=14.05% 00:33:22.507 lat (msec) : 100=14.78%, 250=60.47%, 500=4.40% 00:33:22.507 cpu : usr=0.34%, sys=2.10%, ctx=1081, majf=0, minf=4097 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=4933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job6: (groupid=0, jobs=1): err= 0: pid=963635: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=664, BW=166MiB/s (174MB/s)(1680MiB/10109msec) 00:33:22.507 slat (usec): min=11, max=89940, avg=799.08, stdev=4332.63 00:33:22.507 clat (usec): min=1219, max=301248, avg=95402.05, stdev=57336.74 00:33:22.507 lat (usec): min=1282, max=320428, avg=96201.13, stdev=57817.61 00:33:22.507 clat percentiles (msec): 00:33:22.507 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 43], 00:33:22.507 | 30.00th=[ 57], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 110], 00:33:22.507 | 70.00th=[ 128], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 192], 00:33:22.507 | 99.00th=[ 230], 99.50th=[ 236], 99.90th=[ 266], 99.95th=[ 266], 00:33:22.507 | 99.99th=[ 300] 00:33:22.507 bw ( KiB/s): min=85504, max=379904, per=11.54%, avg=170312.80, stdev=67263.99, samples=20 00:33:22.507 iops : min= 334, max= 1484, avg=665.25, stdev=262.75, samples=20 00:33:22.507 lat (msec) : 2=0.39%, 4=0.15%, 10=3.83%, 20=6.71%, 50=14.53% 00:33:22.507 lat (msec) : 100=30.25%, 250=43.96%, 500=0.19% 00:33:22.507 cpu : usr=0.36%, sys=2.50%, ctx=1155, majf=0, minf=4097 00:33:22.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:22.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.507 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.507 job7: (groupid=0, jobs=1): err= 0: pid=963636: Mon Jul 22 23:13:57 2024 00:33:22.507 read: IOPS=485, BW=121MiB/s (127MB/s)(1227MiB/10109msec) 00:33:22.508 slat (usec): min=10, max=127180, avg=1344.76, stdev=6490.29 00:33:22.508 clat (usec): min=790, max=362839, avg=130310.74, stdev=70256.79 00:33:22.508 lat (usec): min=815, max=377287, avg=131655.50, stdev=71377.13 00:33:22.508 clat percentiles (msec): 00:33:22.508 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 55], 00:33:22.508 | 30.00th=[ 97], 40.00th=[ 123], 50.00th=[ 140], 60.00th=[ 
153], 00:33:22.508 | 70.00th=[ 165], 80.00th=[ 182], 90.00th=[ 215], 95.00th=[ 253], 00:33:22.508 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 347], 00:33:22.508 | 99.99th=[ 363] 00:33:22.508 bw ( KiB/s): min=62464, max=234496, per=8.40%, avg=124006.40, stdev=49216.69, samples=20 00:33:22.508 iops : min= 244, max= 916, avg=484.40, stdev=192.25, samples=20 00:33:22.508 lat (usec) : 1000=0.04% 00:33:22.508 lat (msec) : 2=0.24%, 4=1.00%, 10=2.38%, 20=3.22%, 50=12.31% 00:33:22.508 lat (msec) : 100=11.29%, 250=63.79%, 500=5.73% 00:33:22.508 cpu : usr=0.30%, sys=2.15%, ctx=1023, majf=0, minf=4097 00:33:22.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:22.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.508 issued rwts: total=4907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.508 job8: (groupid=0, jobs=1): err= 0: pid=963638: Mon Jul 22 23:13:57 2024 00:33:22.508 read: IOPS=538, BW=135MiB/s (141MB/s)(1356MiB/10071msec) 00:33:22.508 slat (usec): min=12, max=133553, avg=1060.85, stdev=5676.25 00:33:22.508 clat (usec): min=937, max=333494, avg=117707.77, stdev=67655.59 00:33:22.508 lat (usec): min=957, max=373224, avg=118768.62, stdev=68498.64 00:33:22.508 clat percentiles (msec): 00:33:22.508 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 53], 00:33:22.508 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 116], 60.00th=[ 138], 00:33:22.508 | 70.00th=[ 159], 80.00th=[ 178], 90.00th=[ 203], 95.00th=[ 230], 00:33:22.508 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 330], 00:33:22.508 | 99.99th=[ 334] 00:33:22.508 bw ( KiB/s): min=72192, max=236544, per=9.29%, avg=137181.40, stdev=42931.09, samples=20 00:33:22.508 iops : min= 282, max= 924, avg=535.85, stdev=167.69, samples=20 00:33:22.508 lat (usec) : 1000=0.02% 00:33:22.508 lat (msec) : 2=0.77%, 4=1.57%, 10=2.66%, 20=4.65%, 50=10.05% 00:33:22.508 lat (msec) : 100=22.61%, 250=54.85%, 500=2.82% 00:33:22.508 cpu : usr=0.48%, sys=1.95%, ctx=1078, majf=0, minf=4097 00:33:22.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:22.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.508 issued rwts: total=5422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.508 job9: (groupid=0, jobs=1): err= 0: pid=963639: Mon Jul 22 23:13:57 2024 00:33:22.508 read: IOPS=448, BW=112MiB/s (117MB/s)(1132MiB/10104msec) 00:33:22.508 slat (usec): min=10, max=112038, avg=1115.66, stdev=5915.73 00:33:22.508 clat (usec): min=1695, max=324886, avg=141519.26, stdev=73548.22 00:33:22.508 lat (usec): min=1726, max=327316, avg=142634.93, stdev=74429.09 00:33:22.508 clat percentiles (msec): 00:33:22.508 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 18], 20.00th=[ 63], 00:33:22.508 | 30.00th=[ 113], 40.00th=[ 136], 50.00th=[ 155], 60.00th=[ 169], 00:33:22.508 | 70.00th=[ 186], 80.00th=[ 207], 90.00th=[ 230], 95.00th=[ 251], 00:33:22.508 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:33:22.508 | 99.99th=[ 326] 00:33:22.508 bw ( KiB/s): min=75264, max=207360, per=7.74%, avg=114268.70, stdev=32864.96, samples=20 00:33:22.508 iops : min= 294, max= 810, avg=446.35, stdev=128.39, samples=20 00:33:22.508 lat 
(msec) : 2=0.62%, 4=1.97%, 10=4.44%, 20=3.36%, 50=5.81% 00:33:22.508 lat (msec) : 100=10.32%, 250=68.68%, 500=4.82% 00:33:22.508 cpu : usr=0.37%, sys=1.99%, ctx=1073, majf=0, minf=4097 00:33:22.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:22.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.508 issued rwts: total=4527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.508 job10: (groupid=0, jobs=1): err= 0: pid=963640: Mon Jul 22 23:13:57 2024 00:33:22.508 read: IOPS=516, BW=129MiB/s (135MB/s)(1299MiB/10063msec) 00:33:22.508 slat (usec): min=10, max=204073, avg=827.19, stdev=5354.81 00:33:22.508 clat (usec): min=1414, max=309901, avg=122948.78, stdev=69036.34 00:33:22.508 lat (usec): min=1454, max=450862, avg=123775.98, stdev=69574.09 00:33:22.508 clat percentiles (msec): 00:33:22.508 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 49], 00:33:22.508 | 30.00th=[ 83], 40.00th=[ 110], 50.00th=[ 130], 60.00th=[ 150], 00:33:22.508 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 205], 95.00th=[ 228], 00:33:22.508 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:33:22.508 | 99.99th=[ 309] 00:33:22.508 bw ( KiB/s): min=75927, max=228918, per=8.90%, avg=131363.85, stdev=39570.75, samples=20 00:33:22.508 iops : min= 296, max= 894, avg=513.10, stdev=154.59, samples=20 00:33:22.508 lat (msec) : 2=0.02%, 4=0.92%, 10=4.79%, 20=4.85%, 50=9.78% 00:33:22.508 lat (msec) : 100=14.40%, 250=63.51%, 500=1.73% 00:33:22.508 cpu : usr=0.22%, sys=2.22%, ctx=1107, majf=0, minf=4097 00:33:22.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:22.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:22.508 issued rwts: total=5196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:22.508 00:33:22.508 Run status group 0 (all jobs): 00:33:22.508 READ: bw=1441MiB/s (1511MB/s), 112MiB/s-166MiB/s (117MB/s-174MB/s), io=14.2GiB (15.3GB), run=10043-10109msec 00:33:22.508 00:33:22.508 Disk stats (read/write): 00:33:22.508 nvme0n1: ios=10350/0, merge=0/0, ticks=1240338/0, in_queue=1240338, util=97.00% 00:33:22.508 nvme10n1: ios=10686/0, merge=0/0, ticks=1240327/0, in_queue=1240327, util=97.16% 00:33:22.508 nvme1n1: ios=10584/0, merge=0/0, ticks=1235073/0, in_queue=1235073, util=97.48% 00:33:22.508 nvme2n1: ios=11191/0, merge=0/0, ticks=1239412/0, in_queue=1239412, util=97.63% 00:33:22.508 nvme3n1: ios=9163/0, merge=0/0, ticks=1239158/0, in_queue=1239158, util=97.70% 00:33:22.508 nvme4n1: ios=9658/0, merge=0/0, ticks=1237541/0, in_queue=1237541, util=98.04% 00:33:22.508 nvme5n1: ios=13202/0, merge=0/0, ticks=1239673/0, in_queue=1239673, util=98.24% 00:33:22.508 nvme6n1: ios=9620/0, merge=0/0, ticks=1235392/0, in_queue=1235392, util=98.32% 00:33:22.508 nvme7n1: ios=10624/0, merge=0/0, ticks=1237442/0, in_queue=1237442, util=98.82% 00:33:22.508 nvme8n1: ios=8873/0, merge=0/0, ticks=1238215/0, in_queue=1238215, util=99.04% 00:33:22.508 nvme9n1: ios=10068/0, merge=0/0, ticks=1242165/0, in_queue=1242165, util=99.22% 00:33:22.508 23:13:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:33:22.508 [global] 00:33:22.508 thread=1 00:33:22.508 invalidate=1 00:33:22.508 rw=randwrite 00:33:22.508 time_based=1 00:33:22.508 runtime=10 00:33:22.508 ioengine=libaio 00:33:22.508 direct=1 00:33:22.508 bs=262144 00:33:22.508 iodepth=64 00:33:22.508 norandommap=1 00:33:22.508 numjobs=1 00:33:22.508 00:33:22.508 [job0] 00:33:22.508 filename=/dev/nvme0n1 00:33:22.508 [job1] 00:33:22.508 filename=/dev/nvme10n1 00:33:22.508 [job2] 00:33:22.508 filename=/dev/nvme1n1 00:33:22.508 [job3] 00:33:22.508 filename=/dev/nvme2n1 00:33:22.508 [job4] 00:33:22.508 filename=/dev/nvme3n1 00:33:22.508 [job5] 00:33:22.508 filename=/dev/nvme4n1 00:33:22.508 [job6] 00:33:22.508 filename=/dev/nvme5n1 00:33:22.508 [job7] 00:33:22.508 filename=/dev/nvme6n1 00:33:22.508 [job8] 00:33:22.508 filename=/dev/nvme7n1 00:33:22.508 [job9] 00:33:22.508 filename=/dev/nvme8n1 00:33:22.508 [job10] 00:33:22.508 filename=/dev/nvme9n1 00:33:22.508 Could not set queue depth (nvme0n1) 00:33:22.508 Could not set queue depth (nvme10n1) 00:33:22.508 Could not set queue depth (nvme1n1) 00:33:22.508 Could not set queue depth (nvme2n1) 00:33:22.508 Could not set queue depth (nvme3n1) 00:33:22.508 Could not set queue depth (nvme4n1) 00:33:22.508 Could not set queue depth (nvme5n1) 00:33:22.508 Could not set queue depth (nvme6n1) 00:33:22.508 Could not set queue depth (nvme7n1) 00:33:22.508 Could not set queue depth (nvme8n1) 00:33:22.508 Could not set queue depth (nvme9n1) 00:33:22.508 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.508 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:33:22.509 fio-3.35 00:33:22.509 Starting 11 threads 00:33:32.512 00:33:32.512 job0: (groupid=0, jobs=1): err= 0: pid=964652: Mon Jul 22 23:14:08 2024 00:33:32.512 write: IOPS=377, BW=94.4MiB/s (99.0MB/s)(960MiB/10171msec); 0 zone resets 00:33:32.512 slat (usec): min=32, max=54640, avg=1306.36, stdev=4210.12 00:33:32.512 clat (usec): min=1169, max=401825, avg=168063.23, stdev=82202.50 00:33:32.512 lat (usec): min=1238, max=401869, avg=169369.59, stdev=82956.14 00:33:32.512 clat percentiles (msec): 00:33:32.512 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 87], 
00:33:32.512 | 30.00th=[ 125], 40.00th=[ 157], 50.00th=[ 188], 60.00th=[ 207], 00:33:32.512 | 70.00th=[ 220], 80.00th=[ 239], 90.00th=[ 259], 95.00th=[ 288], 00:33:32.512 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 401], 00:33:32.512 | 99.99th=[ 401] 00:33:32.512 bw ( KiB/s): min=71680, max=164864, per=8.14%, avg=96716.80, stdev=25318.14, samples=20 00:33:32.512 iops : min= 280, max= 644, avg=377.80, stdev=98.90, samples=20 00:33:32.512 lat (msec) : 2=0.26%, 4=0.21%, 10=0.42%, 20=3.46%, 50=8.49% 00:33:32.512 lat (msec) : 100=11.66%, 250=61.39%, 500=14.11% 00:33:32.512 cpu : usr=1.61%, sys=1.69%, ctx=2599, majf=0, minf=1 00:33:32.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:33:32.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.512 issued rwts: total=0,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.512 job1: (groupid=0, jobs=1): err= 0: pid=964653: Mon Jul 22 23:14:08 2024 00:33:32.512 write: IOPS=418, BW=105MiB/s (110MB/s)(1056MiB/10084msec); 0 zone resets 00:33:32.512 slat (usec): min=26, max=62838, avg=1291.35, stdev=3840.66 00:33:32.512 clat (msec): min=6, max=370, avg=151.38, stdev=68.39 00:33:32.512 lat (msec): min=6, max=370, avg=152.67, stdev=69.18 00:33:32.512 clat percentiles (msec): 00:33:32.512 | 1.00th=[ 19], 5.00th=[ 38], 10.00th=[ 55], 20.00th=[ 85], 00:33:32.512 | 30.00th=[ 114], 40.00th=[ 133], 50.00th=[ 157], 60.00th=[ 171], 00:33:32.512 | 70.00th=[ 199], 80.00th=[ 211], 90.00th=[ 236], 95.00th=[ 262], 00:33:32.512 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 355], 00:33:32.512 | 99.99th=[ 372] 00:33:32.512 bw ( KiB/s): min=73728, max=171520, per=8.97%, avg=106555.00, stdev=25257.17, samples=20 00:33:32.512 iops : min= 288, max= 670, avg=416.20, stdev=98.70, samples=20 00:33:32.512 lat (msec) : 10=0.14%, 20=1.04%, 50=7.46%, 100=15.46%, 250=68.47% 00:33:32.512 lat (msec) : 500=7.43% 00:33:32.512 cpu : usr=2.05%, sys=1.79%, ctx=2626, majf=0, minf=1 00:33:32.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:32.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.512 issued rwts: total=0,4225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.512 job2: (groupid=0, jobs=1): err= 0: pid=964654: Mon Jul 22 23:14:08 2024 00:33:32.512 write: IOPS=393, BW=98.4MiB/s (103MB/s)(1007MiB/10230msec); 0 zone resets 00:33:32.512 slat (usec): min=21, max=127986, avg=1495.83, stdev=4588.05 00:33:32.512 clat (usec): min=1152, max=471715, avg=160969.33, stdev=78297.60 00:33:32.512 lat (usec): min=1223, max=471807, avg=162465.16, stdev=78952.59 00:33:32.512 clat percentiles (msec): 00:33:32.512 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 46], 20.00th=[ 93], 00:33:32.512 | 30.00th=[ 121], 40.00th=[ 146], 50.00th=[ 165], 60.00th=[ 188], 00:33:32.512 | 70.00th=[ 207], 80.00th=[ 222], 90.00th=[ 249], 95.00th=[ 268], 00:33:32.512 | 99.00th=[ 380], 99.50th=[ 426], 99.90th=[ 464], 99.95th=[ 468], 00:33:32.512 | 99.99th=[ 472] 00:33:32.512 bw ( KiB/s): min=79872, max=153600, per=8.55%, avg=101478.40, stdev=18774.35, samples=20 00:33:32.512 iops : min= 312, max= 600, avg=396.40, stdev=73.34, samples=20 00:33:32.512 lat (msec) : 2=0.22%, 
4=0.52%, 10=0.77%, 20=1.44%, 50=7.67% 00:33:32.512 lat (msec) : 100=10.90%, 250=68.81%, 500=9.66% 00:33:32.512 cpu : usr=1.55%, sys=1.80%, ctx=2462, majf=0, minf=1 00:33:32.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:33:32.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.512 issued rwts: total=0,4027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.512 job3: (groupid=0, jobs=1): err= 0: pid=964655: Mon Jul 22 23:14:08 2024 00:33:32.512 write: IOPS=405, BW=101MiB/s (106MB/s)(1037MiB/10230msec); 0 zone resets 00:33:32.512 slat (usec): min=19, max=59078, avg=1459.19, stdev=4251.54 00:33:32.512 clat (usec): min=1224, max=503531, avg=156225.77, stdev=85519.42 00:33:32.512 lat (usec): min=1286, max=503611, avg=157684.96, stdev=86413.98 00:33:32.512 clat percentiles (msec): 00:33:32.512 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 63], 00:33:32.512 | 30.00th=[ 106], 40.00th=[ 142], 50.00th=[ 167], 60.00th=[ 186], 00:33:32.512 | 70.00th=[ 203], 80.00th=[ 224], 90.00th=[ 264], 95.00th=[ 296], 00:33:32.512 | 99.00th=[ 359], 99.50th=[ 418], 99.90th=[ 489], 99.95th=[ 506], 00:33:32.512 | 99.99th=[ 506] 00:33:32.512 bw ( KiB/s): min=61440, max=229376, per=8.81%, avg=104576.00, stdev=38556.83, samples=20 00:33:32.512 iops : min= 240, max= 896, avg=408.50, stdev=150.61, samples=20 00:33:32.512 lat (msec) : 2=0.17%, 4=0.36%, 10=1.01%, 20=2.53%, 50=9.47% 00:33:32.512 lat (msec) : 100=15.23%, 250=59.41%, 500=11.74%, 750=0.07% 00:33:32.512 cpu : usr=1.76%, sys=1.75%, ctx=2566, majf=0, minf=1 00:33:32.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:32.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.512 issued rwts: total=0,4149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.512 job4: (groupid=0, jobs=1): err= 0: pid=964656: Mon Jul 22 23:14:08 2024 00:33:32.512 write: IOPS=464, BW=116MiB/s (122MB/s)(1176MiB/10119msec); 0 zone resets 00:33:32.512 slat (usec): min=27, max=80866, avg=1221.75, stdev=3920.56 00:33:32.512 clat (usec): min=976, max=479590, avg=136292.93, stdev=82303.75 00:33:32.512 lat (usec): min=1006, max=486972, avg=137514.68, stdev=83140.05 00:33:32.512 clat percentiles (msec): 00:33:32.512 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 40], 20.00th=[ 61], 00:33:32.512 | 30.00th=[ 85], 40.00th=[ 109], 50.00th=[ 130], 60.00th=[ 155], 00:33:32.512 | 70.00th=[ 171], 80.00th=[ 197], 90.00th=[ 234], 95.00th=[ 271], 00:33:32.512 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 464], 99.95th=[ 468], 00:33:32.512 | 99.99th=[ 481] 00:33:32.512 bw ( KiB/s): min=40960, max=215040, per=10.01%, avg=118816.80, stdev=41747.02, samples=20 00:33:32.512 iops : min= 160, max= 840, avg=464.10, stdev=163.11, samples=20 00:33:32.512 lat (usec) : 1000=0.02% 00:33:32.512 lat (msec) : 2=0.17%, 4=0.28%, 10=1.85%, 20=2.42%, 50=7.70% 00:33:32.512 lat (msec) : 100=23.85%, 250=56.53%, 500=7.19% 00:33:32.512 cpu : usr=2.00%, sys=1.99%, ctx=2896, majf=0, minf=1 00:33:32.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:32.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.512 issued rwts: total=0,4704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.512 job5: (groupid=0, jobs=1): err= 0: pid=964668: Mon Jul 22 23:14:08 2024 00:33:32.513 write: IOPS=423, BW=106MiB/s (111MB/s)(1078MiB/10179msec); 0 zone resets 00:33:32.513 slat (usec): min=27, max=163295, avg=1291.24, stdev=4890.91 00:33:32.513 clat (usec): min=1565, max=477664, avg=149556.48, stdev=82572.47 00:33:32.513 lat (usec): min=1723, max=477812, avg=150847.72, stdev=83048.79 00:33:32.513 clat percentiles (msec): 00:33:32.513 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 90], 00:33:32.513 | 30.00th=[ 108], 40.00th=[ 114], 50.00th=[ 134], 60.00th=[ 157], 00:33:32.513 | 70.00th=[ 184], 80.00th=[ 209], 90.00th=[ 271], 95.00th=[ 305], 00:33:32.513 | 99.00th=[ 393], 99.50th=[ 426], 99.90th=[ 464], 99.95th=[ 468], 00:33:32.513 | 99.99th=[ 477] 00:33:32.513 bw ( KiB/s): min=70144, max=154112, per=9.16%, avg=108723.20, stdev=22778.49, samples=20 00:33:32.513 iops : min= 274, max= 602, avg=424.70, stdev=88.98, samples=20 00:33:32.513 lat (msec) : 2=0.07%, 4=0.14%, 10=0.79%, 20=1.79%, 50=7.80% 00:33:32.513 lat (msec) : 100=12.92%, 250=64.62%, 500=11.88% 00:33:32.513 cpu : usr=1.86%, sys=2.33%, ctx=2634, majf=0, minf=1 00:33:32.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:33:32.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.513 issued rwts: total=0,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.513 job6: (groupid=0, jobs=1): err= 0: pid=964669: Mon Jul 22 23:14:08 2024 00:33:32.513 write: IOPS=466, BW=117MiB/s (122MB/s)(1192MiB/10214msec); 0 zone resets 00:33:32.513 slat (usec): min=27, max=83674, avg=824.49, stdev=3577.43 00:33:32.513 clat (usec): min=1058, max=506826, avg=136200.06, stdev=99406.58 00:33:32.513 lat (usec): min=1110, max=506910, avg=137024.55, stdev=100279.74 00:33:32.513 clat percentiles (msec): 00:33:32.513 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 41], 00:33:32.513 | 30.00th=[ 58], 40.00th=[ 89], 50.00th=[ 124], 60.00th=[ 150], 00:33:32.513 | 70.00th=[ 188], 80.00th=[ 230], 90.00th=[ 275], 95.00th=[ 305], 00:33:32.513 | 99.00th=[ 426], 99.50th=[ 460], 99.90th=[ 502], 99.95th=[ 506], 00:33:32.513 | 99.99th=[ 506] 00:33:32.513 bw ( KiB/s): min=53248, max=195584, per=10.14%, avg=120396.80, stdev=41134.54, samples=20 00:33:32.513 iops : min= 208, max= 764, avg=470.30, stdev=160.68, samples=20 00:33:32.513 lat (msec) : 2=0.65%, 4=1.17%, 10=2.79%, 20=4.34%, 50=17.48% 00:33:32.513 lat (msec) : 100=17.33%, 250=39.80%, 500=16.34%, 750=0.08% 00:33:32.513 cpu : usr=2.09%, sys=2.67%, ctx=3732, majf=0, minf=1 00:33:32.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:32.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.513 issued rwts: total=0,4766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.513 job7: (groupid=0, jobs=1): err= 0: pid=964670: Mon Jul 22 23:14:08 2024 00:33:32.513 write: IOPS=444, BW=111MiB/s (117MB/s)(1132MiB/10182msec); 0 zone resets 00:33:32.513 slat (usec): min=30, max=96537, avg=1206.31, stdev=4018.76 00:33:32.513 
clat (usec): min=1355, max=416774, avg=142563.75, stdev=81967.76 00:33:32.513 lat (usec): min=1472, max=416822, avg=143770.06, stdev=82649.84 00:33:32.513 clat percentiles (msec): 00:33:32.513 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 44], 20.00th=[ 63], 00:33:32.513 | 30.00th=[ 87], 40.00th=[ 111], 50.00th=[ 144], 60.00th=[ 161], 00:33:32.513 | 70.00th=[ 186], 80.00th=[ 209], 90.00th=[ 255], 95.00th=[ 292], 00:33:32.513 | 99.00th=[ 351], 99.50th=[ 372], 99.90th=[ 405], 99.95th=[ 405], 00:33:32.513 | 99.99th=[ 418] 00:33:32.513 bw ( KiB/s): min=59392, max=177664, per=9.62%, avg=114263.45, stdev=39021.07, samples=20 00:33:32.513 iops : min= 232, max= 694, avg=446.30, stdev=152.43, samples=20 00:33:32.513 lat (msec) : 2=0.07%, 4=0.49%, 10=1.33%, 20=2.10%, 50=9.50% 00:33:32.513 lat (msec) : 100=22.14%, 250=53.62%, 500=10.76% 00:33:32.513 cpu : usr=2.05%, sys=1.85%, ctx=2955, majf=0, minf=1 00:33:32.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:32.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.513 issued rwts: total=0,4526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.513 job8: (groupid=0, jobs=1): err= 0: pid=964671: Mon Jul 22 23:14:08 2024 00:33:32.513 write: IOPS=379, BW=95.0MiB/s (99.6MB/s)(972MiB/10228msec); 0 zone resets 00:33:32.513 slat (usec): min=26, max=42689, avg=1685.13, stdev=4503.08 00:33:32.513 clat (usec): min=1698, max=499431, avg=166619.84, stdev=87606.21 00:33:32.513 lat (usec): min=1769, max=499483, avg=168304.97, stdev=88630.10 00:33:32.513 clat percentiles (msec): 00:33:32.513 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 77], 00:33:32.513 | 30.00th=[ 121], 40.00th=[ 146], 50.00th=[ 188], 60.00th=[ 203], 00:33:32.513 | 70.00th=[ 215], 80.00th=[ 245], 90.00th=[ 271], 95.00th=[ 292], 00:33:32.513 | 99.00th=[ 347], 99.50th=[ 414], 99.90th=[ 485], 99.95th=[ 502], 00:33:32.513 | 99.99th=[ 502] 00:33:32.513 bw ( KiB/s): min=55296, max=169298, per=8.24%, avg=97860.10, stdev=34277.16, samples=20 00:33:32.513 iops : min= 216, max= 661, avg=382.25, stdev=133.86, samples=20 00:33:32.513 lat (msec) : 2=0.03%, 4=0.39%, 10=1.21%, 20=4.32%, 50=8.72% 00:33:32.513 lat (msec) : 100=10.27%, 250=56.69%, 500=18.37% 00:33:32.513 cpu : usr=1.91%, sys=1.51%, ctx=2284, majf=0, minf=1 00:33:32.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:33:32.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.513 issued rwts: total=0,3886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.513 job9: (groupid=0, jobs=1): err= 0: pid=964672: Mon Jul 22 23:14:08 2024 00:33:32.513 write: IOPS=424, BW=106MiB/s (111MB/s)(1086MiB/10236msec); 0 zone resets 00:33:32.513 slat (usec): min=21, max=54876, avg=1056.72, stdev=3770.08 00:33:32.513 clat (usec): min=1001, max=518152, avg=149522.68, stdev=94824.81 00:33:32.513 lat (usec): min=1022, max=518207, avg=150579.40, stdev=95771.14 00:33:32.513 clat percentiles (msec): 00:33:32.513 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 56], 00:33:32.513 | 30.00th=[ 90], 40.00th=[ 120], 50.00th=[ 142], 60.00th=[ 163], 00:33:32.513 | 70.00th=[ 197], 80.00th=[ 224], 90.00th=[ 275], 95.00th=[ 330], 00:33:32.513 | 99.00th=[ 409], 
99.50th=[ 435], 99.90th=[ 502], 99.95th=[ 502], 00:33:32.513 | 99.99th=[ 518] 00:33:32.513 bw ( KiB/s): min=53248, max=200704, per=9.22%, avg=109516.80, stdev=34023.89, samples=20 00:33:32.513 iops : min= 208, max= 784, avg=427.80, stdev=132.91, samples=20 00:33:32.513 lat (msec) : 2=0.30%, 4=0.32%, 10=1.80%, 20=3.06%, 50=11.88% 00:33:32.513 lat (msec) : 100=15.73%, 250=52.88%, 500=13.89%, 750=0.14% 00:33:32.513 cpu : usr=1.91%, sys=2.27%, ctx=3161, majf=0, minf=1 00:33:32.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:33:32.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.513 issued rwts: total=0,4342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.513 job10: (groupid=0, jobs=1): err= 0: pid=964674: Mon Jul 22 23:14:08 2024 00:33:32.513 write: IOPS=464, BW=116MiB/s (122MB/s)(1177MiB/10141msec); 0 zone resets 00:33:32.513 slat (usec): min=31, max=85925, avg=1015.67, stdev=3382.21 00:33:32.513 clat (usec): min=1157, max=366496, avg=136494.06, stdev=74725.14 00:33:32.513 lat (usec): min=1308, max=369230, avg=137509.73, stdev=75059.62 00:33:32.513 clat percentiles (msec): 00:33:32.513 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 69], 00:33:32.513 | 30.00th=[ 101], 40.00th=[ 110], 50.00th=[ 130], 60.00th=[ 157], 00:33:32.513 | 70.00th=[ 176], 80.00th=[ 201], 90.00th=[ 239], 95.00th=[ 268], 00:33:32.513 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 363], 00:33:32.513 | 99.99th=[ 368] 00:33:32.513 bw ( KiB/s): min=75776, max=195584, per=10.01%, avg=118860.80, stdev=35149.37, samples=20 00:33:32.513 iops : min= 296, max= 764, avg=464.30, stdev=137.30, samples=20 00:33:32.513 lat (msec) : 2=0.21%, 4=0.38%, 10=1.49%, 20=2.63%, 50=10.82% 00:33:32.513 lat (msec) : 100=14.43%, 250=62.37%, 500=7.67% 00:33:32.513 cpu : usr=1.97%, sys=2.05%, ctx=3049, majf=0, minf=1 00:33:32.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:32.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:33:32.513 issued rwts: total=0,4706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:32.513 00:33:32.513 Run status group 0 (all jobs): 00:33:32.513 WRITE: bw=1160MiB/s (1216MB/s), 94.4MiB/s-117MiB/s (99.0MB/s-122MB/s), io=11.6GiB (12.4GB), run=10084-10236msec 00:33:32.513 00:33:32.513 Disk stats (read/write): 00:33:32.513 nvme0n1: ios=50/7626, merge=0/0, ticks=1334/1249320, in_queue=1250654, util=99.02% 00:33:32.513 nvme10n1: ios=49/8164, merge=0/0, ticks=53/1222260, in_queue=1222313, util=97.19% 00:33:32.513 nvme1n1: ios=49/7997, merge=0/0, ticks=124/1242876, in_queue=1243000, util=97.99% 00:33:32.513 nvme2n1: ios=49/8239, merge=0/0, ticks=207/1242057, in_queue=1242264, util=98.93% 00:33:32.513 nvme3n1: ios=44/9079, merge=0/0, ticks=1854/1218114, in_queue=1219968, util=99.99% 00:33:32.513 nvme4n1: ios=48/8577, merge=0/0, ticks=761/1245861, in_queue=1246622, util=100.00% 00:33:32.513 nvme5n1: ios=15/9480, merge=0/0, ticks=105/1252269, in_queue=1252374, util=98.28% 00:33:32.513 nvme6n1: ios=43/9007, merge=0/0, ticks=1444/1246101, in_queue=1247545, util=100.00% 00:33:32.513 nvme7n1: ios=0/7706, merge=0/0, ticks=0/1237836, in_queue=1237836, util=98.70% 00:33:32.513 nvme8n1: 
ios=38/8604, merge=0/0, ticks=606/1242033, in_queue=1242639, util=100.00% 00:33:32.513 nvme9n1: ios=44/9406, merge=0/0, ticks=1971/1247648, in_queue=1249619, util=100.00% 00:33:32.513 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:33:32.513 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:33:32.513 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:32.513 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:32.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:32.513 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:32.514 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:33:32.775 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:32.775 23:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:33:33.038 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:33.038 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:33:33.303 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:33.303 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:33:33.566 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:33.566 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:33:33.825 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:33:33.826 23:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:33.826 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:33:33.826 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.826 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:33.826 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.826 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:33.826 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:33:34.086 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:34.086 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:33:34.346 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:33:34.346 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:34.346 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:33:34.606 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:33:34.607 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:34.607 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:34.868 23:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:34.868 rmmod nvme_tcp 00:33:34.868 rmmod nvme_fabrics 00:33:34.868 rmmod nvme_keyring 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@124 -- # set -e 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 959535 ']' 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 959535 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 959535 ']' 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 959535 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 959535 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 959535' 00:33:34.868 killing process with pid 959535 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 959535 00:33:34.868 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 959535 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.807 23:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:37.714 00:33:37.714 real 1m3.197s 00:33:37.714 user 3m34.597s 00:33:37.714 sys 0m26.299s 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:33:37.714 ************************************ 00:33:37.714 END TEST nvmf_multiconnection 00:33:37.714 ************************************ 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:37.714 ************************************ 00:33:37.714 START TEST nvmf_initiator_timeout 00:33:37.714 ************************************ 00:33:37.714 23:14:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:33:37.714 * Looking for test storage... 00:33:37.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.714 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.973 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.974 23:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:33:37.974 23:14:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:33:41.288 23:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:41.288 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:41.288 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:41.288 Found net devices under 0000:84:00.0: cvl_0_0 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 
-- # [[ up == up ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:41.288 Found net devices under 0000:84:00.1: cvl_0_1 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:41.288 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:41.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:33:41.289 00:33:41.289 --- 10.0.0.2 ping statistics --- 00:33:41.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.289 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:33:41.289 00:33:41.289 --- 10.0.0.1 ping statistics --- 00:33:41.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.289 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=967982 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 967982 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 967982 ']' 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.289 23:14:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.289 [2024-07-22 23:14:17.536097] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:33:41.289 [2024-07-22 23:14:17.536270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.567 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.567 [2024-07-22 23:14:17.696584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:41.567 [2024-07-22 23:14:17.854982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.567 [2024-07-22 23:14:17.855075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.567 [2024-07-22 23:14:17.855111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.567 [2024-07-22 23:14:17.855142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.567 [2024-07-22 23:14:17.855170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
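The nvmf_tcp_init sequence traced above reduces to a small point-to-point topology: the target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened, reachability is checked with one ping in each direction, and the SPDK target is started inside the namespace so its listener binds to the target address. A condensed sketch of those commands (interface names cvl_0_0/cvl_0_1 and the workspace path are specific to this rig):

  # target port lives in its own namespace; the initiator stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The trace that follows provisions the device under test on top of this: bdev_malloc_create makes Malloc0, bdev_delay_create wraps it as Delay0 with latencies of 30, and the tcp transport, subsystem nqn.2016-06.io.spdk:cnode1, namespace and listener on 10.0.0.2:4420 are created over RPC before nvme connect attaches the kernel initiator. While a 60-second fio write job runs against /dev/nvme0n1, the Delay0 latencies are raised from 30 to 31000000 (write p99 to 310000000) for a few seconds and then restored to 30; the test passes only if fio still completes cleanly, which is what the statistics further down show.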
00:33:41.567 [2024-07-22 23:14:17.855301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.567 [2024-07-22 23:14:17.855429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:41.567 [2024-07-22 23:14:17.855463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:41.567 [2024-07-22 23:14:17.855467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.837 Malloc0 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.837 Delay0 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.837 [2024-07-22 23:14:18.121208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.837 23:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.837 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:42.098 [2024-07-22 23:14:18.149582] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.098 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.098 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:42.669 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:33:42.669 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:33:42.669 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:42.669 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:42.669 23:14:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=968406 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:33:44.578 23:14:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:33:44.835 [global] 00:33:44.835 thread=1 00:33:44.835 invalidate=1 00:33:44.835 rw=write 00:33:44.835 time_based=1 00:33:44.835 runtime=60 00:33:44.835 ioengine=libaio 00:33:44.835 direct=1 00:33:44.835 bs=4096 00:33:44.835 iodepth=1 00:33:44.835 norandommap=0 00:33:44.835 numjobs=1 00:33:44.835 00:33:44.835 verify_dump=1 00:33:44.835 verify_backlog=512 00:33:44.835 verify_state_save=0 00:33:44.835 do_verify=1 00:33:44.835 verify=crc32c-intel 00:33:44.835 [job0] 00:33:44.835 filename=/dev/nvme0n1 00:33:44.835 Could not set queue depth (nvme0n1) 00:33:44.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:44.835 fio-3.35 00:33:44.835 Starting 1 thread 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:48.128 true 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:48.128 true 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:48.128 true 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:48.128 true 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.128 23:14:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:33:50.722 true 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:50.722 true 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:50.722 true 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:50.722 true 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:33:50.722 23:14:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 968406 00:34:46.974 00:34:46.974 job0: (groupid=0, jobs=1): err= 0: pid=968480: Mon Jul 22 23:15:21 2024 00:34:46.974 read: IOPS=68, BW=275KiB/s (282kB/s)(16.1MiB/60009msec) 00:34:46.974 slat (usec): min=8, max=9763, avg=30.33, stdev=151.67 00:34:46.974 clat (usec): min=294, max=41054k, avg=14084.22, stdev=638779.29 00:34:46.974 lat (usec): min=302, max=41054k, avg=14114.54, stdev=638779.14 00:34:46.974 clat percentiles (usec): 00:34:46.974 | 1.00th=[ 355], 5.00th=[ 396], 10.00th=[ 424], 00:34:46.974 | 20.00th=[ 453], 30.00th=[ 478], 40.00th=[ 490], 00:34:46.974 | 50.00th=[ 502], 60.00th=[ 510], 70.00th=[ 519], 00:34:46.974 | 80.00th=[ 529], 90.00th=[ 594], 95.00th=[ 41157], 00:34:46.974 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:34:46.974 | 99.95th=[ 42206], 99.99th=[17112761] 00:34:46.974 write: IOPS=76, BW=307KiB/s (315kB/s)(18.0MiB/60009msec); 0 zone resets 00:34:46.974 slat (nsec): min=9778, max=94882, avg=24774.23, stdev=8341.79 00:34:46.974 clat (usec): min=215, max=3817, avg=330.30, stdev=93.82 00:34:46.974 lat (usec): min=226, max=3857, avg=355.08, stdev=95.74 00:34:46.974 clat percentiles (usec): 00:34:46.974 | 1.00th=[ 239], 5.00th=[ 260], 10.00th=[ 273], 20.00th=[ 293], 00:34:46.974 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 334], 00:34:46.974 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:34:46.974 | 99.00th=[ 449], 99.50th=[ 474], 99.90th=[ 537], 99.95th=[ 3032], 00:34:46.974 | 99.99th=[ 3818] 
00:34:46.974 bw ( KiB/s): min= 3336, max= 5976, per=100.00%, avg=4608.00, stdev=785.84, samples=8 00:34:46.974 iops : min= 834, max= 1494, avg=1152.00, stdev=196.46, samples=8 00:34:46.974 lat (usec) : 250=1.79%, 500=73.64%, 750=20.17%, 1000=0.01% 00:34:46.974 lat (msec) : 2=0.07%, 4=0.05%, 50=4.27%, >=2000=0.01% 00:34:46.974 cpu : usr=0.23%, sys=0.41%, ctx=8740, majf=0, minf=2 00:34:46.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.974 issued rwts: total=4131,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.974 00:34:46.974 Run status group 0 (all jobs): 00:34:46.974 READ: bw=275KiB/s (282kB/s), 275KiB/s-275KiB/s (282kB/s-282kB/s), io=16.1MiB (16.9MB), run=60009-60009msec 00:34:46.974 WRITE: bw=307KiB/s (315kB/s), 307KiB/s-307KiB/s (315kB/s-315kB/s), io=18.0MiB (18.9MB), run=60009-60009msec 00:34:46.974 00:34:46.974 Disk stats (read/write): 00:34:46.974 nvme0n1: ios=4221/4608, merge=0/0, ticks=17097/1403, in_queue=18500, util=99.83% 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:46.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:34:46.974 nvmf hotplug test: fio successful as expected 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap 
- SIGINT SIGTERM EXIT 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:46.974 rmmod nvme_tcp 00:34:46.974 rmmod nvme_fabrics 00:34:46.974 rmmod nvme_keyring 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 967982 ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 967982 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 967982 ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 967982 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 967982 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 967982' 00:34:46.974 killing process with pid 967982 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 967982 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 967982 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.974 23:15:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:47.912 00:34:47.912 real 1m10.078s 00:34:47.912 user 4m13.422s 00:34:47.912 sys 0m7.648s 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:47.912 ************************************ 00:34:47.912 END TEST nvmf_initiator_timeout 00:34:47.912 ************************************ 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:34:47.912 23:15:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.200 23:15:27 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:51.201 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:51.201 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:51.201 Found net devices under 0000:84:00.0: cvl_0_0 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:51.201 Found net devices under 0000:84:00.1: cvl_0_1 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:34:51.201 ************************************ 00:34:51.201 START TEST nvmf_perf_adq 00:34:51.201 ************************************ 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:34:51.201 * Looking for test storage... 
00:34:51.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.201 23:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.201 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.202 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.202 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.202 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:34:51.202 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:34:51.202 23:15:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:54.493 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.493 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:34:54.493 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:54.493 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:54.493 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:54.493 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:54.494 23:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:54.494 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:54.494 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:54.494 Found net devices under 0000:84:00.0: cvl_0_0 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
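The gather_supported_nvmf_pci_devs loop being traced here resolves each supported NIC (the two E810/ice functions 0000:84:00.0 and 0000:84:00.1 on this host) to its kernel net interface through sysfs. A minimal sketch of that mapping, with the PCI addresses hard-coded for illustration (the real common.sh builds the list from a PCI bus cache and also checks driver type and link state):

  net_devs=()
  for pci in 0000:84:00.0 0000:84:00.1; do
      # each PCI function exposes its net interface(s) under /sys/bus/pci/devices/<bdf>/net/
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

On this rig the result is cvl_0_0 and cvl_0_1, which perf_adq.sh then takes over as TCP_INTERFACE_LIST.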
00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:54.494 Found net devices under 0000:84:00.1: cvl_0_1 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:34:54.494 23:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:34:55.065 23:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:34:57.606 23:15:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
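Before the ADQ run starts, adq_reload_driver (perf_adq.sh@53-55 in the trace) simply unloads and reloads the ice driver and waits for the ports to come back; presumably this guarantees a clean starting point for the queue/traffic-class configuration applied later in the test. The equivalent commands, as shown in the trace:

  # adq_reload_driver: put the E810 'ice' driver back into a known-clean state
  rmmod ice
  modprobe ice
  sleep 5    # give the cvl_* ports time to reappear before nvmftestinit re-probes them

nvmftestinit then repeats the same device discovery and namespace plumbing that the initiator_timeout test used above.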
00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:02.890 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:02.891 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:02.891 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:02.891 Found net devices under 0000:84:00.0: cvl_0_0 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:02.891 23:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:02.891 Found net devices under 0000:84:00.1: cvl_0_1 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
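For reference, the nvmf_tcp_init sequence traced above amounts to splitting the two E810 ports between namespaces: the target port is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, while the initiator port stays in the default namespace as 10.0.0.1. A condensed standalone sketch of those same commands (interface names and the 10.0.0.0/24 addressing are specific to this test box):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up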
00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:02.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:35:02.891 00:35:02.891 --- 10.0.0.2 ping statistics --- 00:35:02.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.891 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:02.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:35:02.891 00:35:02.891 --- 10.0.0.1 ping statistics --- 00:35:02.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.891 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=980215 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 980215 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 980215 ']' 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:02.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.891 23:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:02.891 [2024-07-22 23:15:38.625780] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:02.891 [2024-07-22 23:15:38.625885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.891 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.891 [2024-07-22 23:15:38.742022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.891 [2024-07-22 23:15:38.897391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.891 [2024-07-22 23:15:38.897457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.891 [2024-07-22 23:15:38.897486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.891 [2024-07-22 23:15:38.897504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.891 [2024-07-22 23:15:38.897519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.891 [2024-07-22 23:15:38.897587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.891 [2024-07-22 23:15:38.897653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.891 [2024-07-22 23:15:38.897726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.892 [2024-07-22 23:15:38.897730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
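The adq_configure_nvmf_target 0 call starting here, together with the nvmfappstart above it, does all of its work through SPDK's JSON-RPC interface; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. A rough hand-typed equivalent of this first (placement-id 0) bring-up, using the same arguments that appear in the surrounding rpc_cmd lines and with the Jenkins workspace path shortened, would be:

# Start the target inside the namespace and leave it waiting for RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

./scripts/rpc.py sock_get_default_impl                      # reports "posix" in this run
./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420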
00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.892 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:03.152 [2024-07-22 23:15:39.241484] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:03.152 Malloc1 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:03.152 [2024-07-22 23:15:39.304027] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=980361 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:35:03.152 23:15:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:03.152 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:35:05.062 "tick_rate": 2700000000, 00:35:05.062 "poll_groups": [ 00:35:05.062 { 00:35:05.062 "name": "nvmf_tgt_poll_group_000", 00:35:05.062 "admin_qpairs": 1, 00:35:05.062 "io_qpairs": 1, 00:35:05.062 "current_admin_qpairs": 1, 00:35:05.062 "current_io_qpairs": 1, 00:35:05.062 "pending_bdev_io": 0, 00:35:05.062 "completed_nvme_io": 14998, 00:35:05.062 "transports": [ 00:35:05.062 { 00:35:05.062 "trtype": "TCP" 00:35:05.062 } 00:35:05.062 ] 00:35:05.062 }, 00:35:05.062 { 00:35:05.062 "name": "nvmf_tgt_poll_group_001", 00:35:05.062 "admin_qpairs": 0, 00:35:05.062 "io_qpairs": 1, 00:35:05.062 "current_admin_qpairs": 0, 00:35:05.062 "current_io_qpairs": 1, 00:35:05.062 "pending_bdev_io": 0, 00:35:05.062 "completed_nvme_io": 15141, 00:35:05.062 "transports": [ 00:35:05.062 { 00:35:05.062 "trtype": "TCP" 00:35:05.062 } 00:35:05.062 ] 00:35:05.062 }, 00:35:05.062 { 00:35:05.062 "name": "nvmf_tgt_poll_group_002", 00:35:05.062 "admin_qpairs": 0, 00:35:05.062 "io_qpairs": 1, 00:35:05.062 "current_admin_qpairs": 0, 00:35:05.062 "current_io_qpairs": 1, 00:35:05.062 "pending_bdev_io": 0, 00:35:05.062 "completed_nvme_io": 14982, 00:35:05.062 "transports": [ 00:35:05.062 { 00:35:05.062 "trtype": "TCP" 00:35:05.062 } 00:35:05.062 ] 00:35:05.062 }, 00:35:05.062 { 00:35:05.062 "name": "nvmf_tgt_poll_group_003", 00:35:05.062 "admin_qpairs": 0, 00:35:05.062 "io_qpairs": 1, 00:35:05.062 "current_admin_qpairs": 0, 00:35:05.062 "current_io_qpairs": 1, 00:35:05.062 "pending_bdev_io": 0, 00:35:05.062 "completed_nvme_io": 14974, 00:35:05.062 "transports": [ 00:35:05.062 { 00:35:05.062 "trtype": "TCP" 00:35:05.062 } 00:35:05.062 ] 00:35:05.062 } 00:35:05.062 ] 00:35:05.062 }' 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:35:05.062 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:35:05.322 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:35:05.322 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:35:05.322 23:15:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 980361 00:35:13.456 Initializing NVMe Controllers 00:35:13.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:13.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:35:13.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:35:13.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:35:13.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:35:13.456 Initialization complete. Launching workers. 00:35:13.456 ======================================================== 00:35:13.456 Latency(us) 00:35:13.456 Device Information : IOPS MiB/s Average min max 00:35:13.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7874.90 30.76 8128.22 2927.37 13523.90 00:35:13.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7977.50 31.16 8022.16 2791.07 13457.40 00:35:13.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7964.80 31.11 8036.86 3046.71 13380.76 00:35:13.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7945.90 31.04 8056.36 3184.06 13007.01 00:35:13.456 ======================================================== 00:35:13.456 Total : 31763.09 124.07 8060.70 2791.07 13523.90 00:35:13.456 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:13.456 rmmod nvme_tcp 00:35:13.456 rmmod nvme_fabrics 00:35:13.456 rmmod nvme_keyring 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 980215 ']' 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 980215 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 980215 ']' 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 980215 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980215 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:13.456 23:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980215' 00:35:13.456 killing process with pid 980215 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 980215 00:35:13.456 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 980215 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.716 23:15:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.258 23:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:16.258 23:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:35:16.258 23:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:35:16.518 23:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:35:19.086 23:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:24.376 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:24.377 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:24.377 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:24.377 Found net devices under 0000:84:00.0: cvl_0_0 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:24.377 Found net devices under 0000:84:00.1: cvl_0_1 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:24.377 23:15:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:24.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:35:24.377 00:35:24.377 --- 10.0.0.2 ping statistics --- 00:35:24.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.377 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:24.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:24.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:35:24.377 00:35:24.377 --- 10.0.0.1 ping statistics --- 00:35:24.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.377 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:35:24.377 net.core.busy_poll = 1 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:35:24.377 net.core.busy_read = 1 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:35:24.377 
23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=982840 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 982840 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 982840 ']' 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.377 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:24.378 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.378 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:24.378 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.378 [2024-07-22 23:16:00.389668] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:24.378 [2024-07-22 23:16:00.389843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.378 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.378 [2024-07-22 23:16:00.546586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:24.636 [2024-07-22 23:16:00.705225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.636 [2024-07-22 23:16:00.705288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.636 [2024-07-22 23:16:00.705319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.636 [2024-07-22 23:16:00.705338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.636 [2024-07-22 23:16:00.705355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
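adq_configure_driver, traced just above after the ice driver reload, is where ADQ is actually switched on for the target port: hardware TC offload, busy polling, an mqprio channel layout, and a flower filter that steers NVMe/TCP traffic on port 4420 into TC1. Pulled out of the trace into a standalone sketch; the 2@0 2@2 queue split matches this particular 4-queue setup, and the in_ns helper is just shorthand for the netns prefix used in the log:

in_ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }     # the target port lives in this namespace

in_ns ethtool --offload cvl_0_0 hw-tc-offload on
in_ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
in_ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
in_ns tc qdisc add dev cvl_0_0 ingress
in_ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# scripts/perf/nvmf/set_xps_rxqs (run last in the trace) then pins XPS transmit queues to matching CPUs.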
00:35:24.636 [2024-07-22 23:16:00.705424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.636 [2024-07-22 23:16:00.705487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.636 [2024-07-22 23:16:00.705548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:24.636 [2024-07-22 23:16:00.705552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.637 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:24.637 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:35:24.637 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:24.637 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:24.637 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.894 23:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.894 [2024-07-22 23:16:01.161513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:24.894 Malloc1 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.894 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:25.153 [2024-07-22 23:16:01.220229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=982991 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:35:25.153 23:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:25.153 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:35:27.054 "tick_rate": 2700000000, 00:35:27.054 "poll_groups": [ 00:35:27.054 { 00:35:27.054 "name": "nvmf_tgt_poll_group_000", 00:35:27.054 "admin_qpairs": 1, 00:35:27.054 "io_qpairs": 2, 00:35:27.054 "current_admin_qpairs": 1, 00:35:27.054 
"current_io_qpairs": 2, 00:35:27.054 "pending_bdev_io": 0, 00:35:27.054 "completed_nvme_io": 19208, 00:35:27.054 "transports": [ 00:35:27.054 { 00:35:27.054 "trtype": "TCP" 00:35:27.054 } 00:35:27.054 ] 00:35:27.054 }, 00:35:27.054 { 00:35:27.054 "name": "nvmf_tgt_poll_group_001", 00:35:27.054 "admin_qpairs": 0, 00:35:27.054 "io_qpairs": 2, 00:35:27.054 "current_admin_qpairs": 0, 00:35:27.054 "current_io_qpairs": 2, 00:35:27.054 "pending_bdev_io": 0, 00:35:27.054 "completed_nvme_io": 18791, 00:35:27.054 "transports": [ 00:35:27.054 { 00:35:27.054 "trtype": "TCP" 00:35:27.054 } 00:35:27.054 ] 00:35:27.054 }, 00:35:27.054 { 00:35:27.054 "name": "nvmf_tgt_poll_group_002", 00:35:27.054 "admin_qpairs": 0, 00:35:27.054 "io_qpairs": 0, 00:35:27.054 "current_admin_qpairs": 0, 00:35:27.054 "current_io_qpairs": 0, 00:35:27.054 "pending_bdev_io": 0, 00:35:27.054 "completed_nvme_io": 0, 00:35:27.054 "transports": [ 00:35:27.054 { 00:35:27.054 "trtype": "TCP" 00:35:27.054 } 00:35:27.054 ] 00:35:27.054 }, 00:35:27.054 { 00:35:27.054 "name": "nvmf_tgt_poll_group_003", 00:35:27.054 "admin_qpairs": 0, 00:35:27.054 "io_qpairs": 0, 00:35:27.054 "current_admin_qpairs": 0, 00:35:27.054 "current_io_qpairs": 0, 00:35:27.054 "pending_bdev_io": 0, 00:35:27.054 "completed_nvme_io": 0, 00:35:27.054 "transports": [ 00:35:27.054 { 00:35:27.054 "trtype": "TCP" 00:35:27.054 } 00:35:27.054 ] 00:35:27.054 } 00:35:27.054 ] 00:35:27.054 }' 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:35:27.054 23:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 982991 00:35:35.168 Initializing NVMe Controllers 00:35:35.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:35.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:35:35.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:35:35.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:35:35.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:35:35.168 Initialization complete. Launching workers. 
00:35:35.168 ======================================================== 00:35:35.168 Latency(us) 00:35:35.168 Device Information : IOPS MiB/s Average min max 00:35:35.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4768.70 18.63 13426.07 2560.29 59498.44 00:35:35.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5368.80 20.97 11923.75 2508.64 59340.90 00:35:35.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5441.40 21.26 11766.64 2338.74 57950.30 00:35:35.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4902.50 19.15 13099.68 2569.07 58712.47 00:35:35.168 ======================================================== 00:35:35.168 Total : 20481.40 80.01 12513.27 2338.74 59498.44 00:35:35.168 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:35.168 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:35.168 rmmod nvme_tcp 00:35:35.168 rmmod nvme_fabrics 00:35:35.168 rmmod nvme_keyring 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 982840 ']' 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 982840 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 982840 ']' 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 982840 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 982840 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 982840' 00:35:35.428 killing process with pid 982840 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 982840 00:35:35.428 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 982840 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:35.688 23:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.688 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.222 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:38.222 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:38.222 00:35:38.222 real 0m46.685s 00:35:38.222 user 2m43.222s 00:35:38.222 sys 0m11.393s 00:35:38.222 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:38.223 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:35:38.223 ************************************ 00:35:38.223 END TEST nvmf_perf_adq 00:35:38.223 ************************************ 00:35:38.223 23:16:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:35:38.223 23:16:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:35:38.223 23:16:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:38.223 23:16:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.223 23:16:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:35:38.223 ************************************ 00:35:38.223 START TEST nvmf_shutdown 00:35:38.223 ************************************ 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:35:38.223 * Looking for test storage... 
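The log then hands over to the shutdown tests through the same run_test wrapper. For anyone reproducing this outside Jenkins, the equivalent direct invocation from an SPDK checkout is the command shown in the trace minus the workspace prefix (the path below is a placeholder):

cd /path/to/spdk
./test/nvmf/target/shutdown.sh --transport=tcp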
00:35:38.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.223 23:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:35:38.223 23:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:38.223 ************************************ 00:35:38.223 START TEST nvmf_shutdown_tc1 00:35:38.223 ************************************ 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:35:38.223 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:41.511 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:41.512 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:41.512 23:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:41.512 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:41.512 Found net devices under 0000:84:00.0: cvl_0_0 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:41.512 Found net devices under 0000:84:00.1: cvl_0_1 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.512 23:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:41.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:35:41.512 00:35:41.512 --- 10.0.0.2 ping statistics --- 00:35:41.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.512 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:35:41.512 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:41.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:35:41.513 00:35:41.513 --- 10.0.0.1 ping statistics --- 00:35:41.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.513 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=986164 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 986164 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 986164 ']' 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:41.513 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:41.513 [2024-07-22 23:16:17.508592] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:41.513 [2024-07-22 23:16:17.508692] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.513 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.513 [2024-07-22 23:16:17.602167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:41.513 [2024-07-22 23:16:17.713342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.513 [2024-07-22 23:16:17.713410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:41.513 [2024-07-22 23:16:17.713429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:41.513 [2024-07-22 23:16:17.713445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:41.513 [2024-07-22 23:16:17.713459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
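For readability, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) amounts to this: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-checked before the target application starts. A condensed restatement of the commands already run by the test, nothing beyond what the trace shows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator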
00:35:41.513 [2024-07-22 23:16:17.713557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.513 [2024-07-22 23:16:17.713617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.513 [2024-07-22 23:16:17.713677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:35:41.513 [2024-07-22 23:16:17.713680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:41.773 [2024-07-22 23:16:17.901645] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.773 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:41.773 Malloc1 00:35:41.773 [2024-07-22 23:16:18.012412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.773 Malloc2 00:35:42.032 Malloc3 00:35:42.032 Malloc4 00:35:42.032 Malloc5 00:35:42.032 Malloc6 00:35:42.032 Malloc7 00:35:42.292 Malloc8 00:35:42.292 Malloc9 00:35:42.292 Malloc10 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=986341 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 986341 /var/tmp/bdevperf.sock 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 986341 ']' 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:35:42.292 23:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.292 { 00:35:42.292 "params": { 00:35:42.292 "name": "Nvme$subsystem", 00:35:42.292 "trtype": "$TEST_TRANSPORT", 00:35:42.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.292 "adrfam": "ipv4", 00:35:42.292 "trsvcid": "$NVMF_PORT", 00:35:42.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.292 "hdgst": ${hdgst:-false}, 00:35:42.292 "ddgst": ${ddgst:-false} 00:35:42.292 }, 00:35:42.292 "method": "bdev_nvme_attach_controller" 00:35:42.292 } 00:35:42.292 EOF 00:35:42.292 )") 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.292 { 00:35:42.292 "params": { 00:35:42.292 "name": "Nvme$subsystem", 00:35:42.292 "trtype": "$TEST_TRANSPORT", 00:35:42.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.292 "adrfam": "ipv4", 00:35:42.292 "trsvcid": "$NVMF_PORT", 00:35:42.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.292 "hdgst": ${hdgst:-false}, 00:35:42.292 "ddgst": ${ddgst:-false} 00:35:42.292 }, 00:35:42.292 "method": "bdev_nvme_attach_controller" 00:35:42.292 } 00:35:42.292 EOF 00:35:42.292 )") 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.292 { 00:35:42.292 "params": { 00:35:42.292 "name": 
"Nvme$subsystem", 00:35:42.292 "trtype": "$TEST_TRANSPORT", 00:35:42.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.292 "adrfam": "ipv4", 00:35:42.292 "trsvcid": "$NVMF_PORT", 00:35:42.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.292 "hdgst": ${hdgst:-false}, 00:35:42.292 "ddgst": ${ddgst:-false} 00:35:42.292 }, 00:35:42.292 "method": "bdev_nvme_attach_controller" 00:35:42.292 } 00:35:42.292 EOF 00:35:42.292 )") 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.292 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.292 { 00:35:42.292 "params": { 00:35:42.292 "name": "Nvme$subsystem", 00:35:42.292 "trtype": "$TEST_TRANSPORT", 00:35:42.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.292 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.293 { 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme$subsystem", 00:35:42.293 "trtype": "$TEST_TRANSPORT", 00:35:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.293 { 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme$subsystem", 00:35:42.293 "trtype": "$TEST_TRANSPORT", 00:35:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.293 { 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme$subsystem", 00:35:42.293 "trtype": "$TEST_TRANSPORT", 00:35:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.293 { 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme$subsystem", 00:35:42.293 "trtype": "$TEST_TRANSPORT", 00:35:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.293 { 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme$subsystem", 00:35:42.293 "trtype": "$TEST_TRANSPORT", 00:35:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.293 { 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme$subsystem", 00:35:42.293 "trtype": "$TEST_TRANSPORT", 00:35:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "$NVMF_PORT", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.293 "hdgst": ${hdgst:-false}, 00:35:42.293 "ddgst": ${ddgst:-false} 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 } 00:35:42.293 EOF 00:35:42.293 )") 00:35:42.293 23:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:35:42.293 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme1", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme2", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme3", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme4", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme5", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme6", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme7", 00:35:42.293 "trtype": "tcp", 00:35:42.293 "traddr": "10.0.0.2", 00:35:42.293 "adrfam": "ipv4", 00:35:42.293 "trsvcid": "4420", 00:35:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:35:42.293 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:35:42.293 "hdgst": false, 00:35:42.293 "ddgst": false 00:35:42.293 }, 00:35:42.293 "method": "bdev_nvme_attach_controller" 00:35:42.293 },{ 00:35:42.293 "params": { 00:35:42.293 "name": "Nvme8", 00:35:42.293 "trtype": "tcp", 
00:35:42.294 "traddr": "10.0.0.2", 00:35:42.294 "adrfam": "ipv4", 00:35:42.294 "trsvcid": "4420", 00:35:42.294 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:35:42.294 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:35:42.294 "hdgst": false, 00:35:42.294 "ddgst": false 00:35:42.294 }, 00:35:42.294 "method": "bdev_nvme_attach_controller" 00:35:42.294 },{ 00:35:42.294 "params": { 00:35:42.294 "name": "Nvme9", 00:35:42.294 "trtype": "tcp", 00:35:42.294 "traddr": "10.0.0.2", 00:35:42.294 "adrfam": "ipv4", 00:35:42.294 "trsvcid": "4420", 00:35:42.294 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:35:42.294 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:35:42.294 "hdgst": false, 00:35:42.294 "ddgst": false 00:35:42.294 }, 00:35:42.294 "method": "bdev_nvme_attach_controller" 00:35:42.294 },{ 00:35:42.294 "params": { 00:35:42.294 "name": "Nvme10", 00:35:42.294 "trtype": "tcp", 00:35:42.294 "traddr": "10.0.0.2", 00:35:42.294 "adrfam": "ipv4", 00:35:42.294 "trsvcid": "4420", 00:35:42.294 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:35:42.294 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:35:42.294 "hdgst": false, 00:35:42.294 "ddgst": false 00:35:42.294 }, 00:35:42.294 "method": "bdev_nvme_attach_controller" 00:35:42.294 }' 00:35:42.294 [2024-07-22 23:16:18.584040] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:42.294 [2024-07-22 23:16:18.584128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:42.552 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.552 [2024-07-22 23:16:18.662276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.552 [2024-07-22 23:16:18.772769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 986341 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:35:45.115 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:35:46.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 986341 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 986164 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.051 "ddgst": ${ddgst:-false} 00:35:46.051 }, 00:35:46.051 "method": "bdev_nvme_attach_controller" 00:35:46.051 } 00:35:46.051 EOF 00:35:46.051 )") 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.051 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.051 { 00:35:46.051 "params": { 00:35:46.051 "name": "Nvme$subsystem", 00:35:46.051 "trtype": "$TEST_TRANSPORT", 00:35:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.051 "adrfam": "ipv4", 00:35:46.051 "trsvcid": "$NVMF_PORT", 00:35:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.051 "hdgst": ${hdgst:-false}, 00:35:46.052 "ddgst": ${ddgst:-false} 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 } 00:35:46.052 EOF 00:35:46.052 )") 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.052 { 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme$subsystem", 00:35:46.052 "trtype": "$TEST_TRANSPORT", 00:35:46.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "$NVMF_PORT", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.052 "hdgst": ${hdgst:-false}, 00:35:46.052 "ddgst": ${ddgst:-false} 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 } 00:35:46.052 EOF 00:35:46.052 )") 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.052 { 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme$subsystem", 00:35:46.052 "trtype": "$TEST_TRANSPORT", 00:35:46.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "$NVMF_PORT", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.052 "hdgst": ${hdgst:-false}, 00:35:46.052 "ddgst": ${ddgst:-false} 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 } 00:35:46.052 EOF 00:35:46.052 )") 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
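What the trace above is exercising is SPDK's gen_nvmf_target_json helper in nvmf/common.sh: one heredoc fragment is appended to the config array per subsystem, then the fragments are comma-joined and validated with jq before being fed to bdevperf. The following is a minimal stand-alone sketch of that pattern, not the helper's exact output; the wrapper object shape and the default digest values are illustrative.

```bash
#!/usr/bin/env bash
# Sketch: one bdev_nvme_attach_controller fragment per subsystem, comma-joined
# via IFS and validated with jq -- the same shape the trace above is building.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in "${@:-1}"; do   # e.g. run with: 1 2 3 4 5 6 7 8 9 10
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# ${config[*]} joins the fragments with the first character of IFS, i.e. ','.
IFS=,
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
```

The joined, jq-validated JSON is what shows up printed in full a few lines further down in the trace.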
00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:35:46.052 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme1", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme2", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme3", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme4", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme5", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme6", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme7", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme8", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme9", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 },{ 00:35:46.052 "params": { 00:35:46.052 "name": "Nvme10", 00:35:46.052 "trtype": "tcp", 00:35:46.052 "traddr": "10.0.0.2", 00:35:46.052 "adrfam": "ipv4", 00:35:46.052 "trsvcid": "4420", 00:35:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:35:46.052 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:35:46.052 "hdgst": false, 00:35:46.052 "ddgst": false 00:35:46.052 }, 00:35:46.052 "method": "bdev_nvme_attach_controller" 00:35:46.052 }' 00:35:46.052 [2024-07-22 23:16:22.206619] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:46.052 [2024-07-22 23:16:22.206707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986780 ] 00:35:46.052 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.052 [2024-07-22 23:16:22.288835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.311 [2024-07-22 23:16:22.399586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.211 Running I/O for 1 seconds... 00:35:49.146 00:35:49.146 Latency(us) 00:35:49.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.146 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme1n1 : 1.11 173.30 10.83 0.00 0.00 363542.76 44273.21 284280.60 00:35:49.146 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme2n1 : 1.14 173.51 10.84 0.00 0.00 344234.89 28738.75 309135.74 00:35:49.146 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme3n1 : 1.09 175.87 10.99 0.00 0.00 341967.96 29515.47 337097.77 00:35:49.146 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme4n1 : 1.10 174.82 10.93 0.00 0.00 336009.54 25049.32 313796.08 00:35:49.146 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme5n1 : 1.20 160.31 10.02 0.00 0.00 361100.83 25826.04 351078.78 00:35:49.146 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme6n1 : 1.19 161.47 10.09 0.00 0.00 349857.56 29321.29 349525.33 00:35:49.146 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme7n1 : 1.25 205.39 12.84 0.00 0.00 270548.20 20777.34 344865.00 00:35:49.146 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 
Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme8n1 : 1.25 204.24 12.77 0.00 0.00 266133.43 19806.44 351078.78 00:35:49.146 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme9n1 : 1.21 158.80 9.93 0.00 0.00 332045.02 32234.00 365059.79 00:35:49.146 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:49.146 Verification LBA range: start 0x0 length 0x400 00:35:49.146 Nvme10n1 : 1.26 202.74 12.67 0.00 0.00 256463.17 7524.50 382147.70 00:35:49.146 =================================================================================================================== 00:35:49.146 Total : 1790.45 111.90 0.00 0.00 317012.33 7524.50 382147.70 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:49.405 rmmod nvme_tcp 00:35:49.405 rmmod nvme_fabrics 00:35:49.405 rmmod nvme_keyring 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 986164 ']' 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 986164 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 986164 ']' 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 986164 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:35:49.405 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 986164 00:35:49.663 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:49.663 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:49.663 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 986164' 00:35:49.663 killing process with pid 986164 00:35:49.663 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 986164 00:35:49.663 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 986164 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.231 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:52.139 00:35:52.139 real 0m14.180s 00:35:52.139 user 0m40.600s 00:35:52.139 sys 0m4.439s 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:52.139 ************************************ 00:35:52.139 END TEST nvmf_shutdown_tc1 00:35:52.139 ************************************ 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:52.139 ************************************ 00:35:52.139 START TEST nvmf_shutdown_tc2 00:35:52.139 ************************************ 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:35:52.139 23:16:28 
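Between the two test cases, stoptarget and nvmftestfini (traced above) drop the bdevperf state and RPC files, unload the kernel NVMe/TCP initiator modules and kill the target before tc2 re-initializes everything. A condensed sketch of that teardown follows; paths and interface names are taken from this run, and the real killprocess/remove_spdk_ns helpers do additional bookkeeping not shown here.

```bash
#!/usr/bin/env bash
# Sketch of the stoptarget + nvmftestfini teardown seen in the trace above.
TESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
TARGET_NS=cvl_0_0_ns_spdk

rm -f ./local-job0-0-verify.state                 # bdevperf verify state file
rm -rf "$TESTDIR/bdevperf.conf" "$TESTDIR/rpcs.txt"

sync
modprobe -v -r nvme-tcp || true                   # also pulls out nvme_fabrics/nvme_keyring when unused
modprobe -v -r nvme-fabrics || true

# Stop the target app if it is still running (the trace does this via killprocess).
if [[ -n ${nvmfpid:-} ]] && kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" || true
fi

# Undo the namespace/address wiring so the next test case can set it up again.
ip netns delete "$TARGET_NS" 2>/dev/null || true
ip -4 addr flush cvl_0_1 2>/dev/null || true
```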
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.139 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- 
# mlx=() 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:52.140 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:52.140 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:52.401 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.401 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:52.402 Found net devices under 0000:84:00.0: cvl_0_0 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:52.402 23:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:52.402 Found net devices under 0000:84:00.1: cvl_0_1 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:52.402 23:16:28 
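The nvmf_tcp_init sequence above moves one port of the detected e810 pair (cvl_0_0) into a private network namespace so that the target side (10.0.0.2) and the initiator side (10.0.0.1) can exchange NVMe/TCP traffic on a single host over real hardware. A condensed sketch of that wiring, reusing the interface and namespace names from this run; the real logic, including device discovery and error handling, lives in nvmf/common.sh.

```bash
#!/usr/bin/env bash
# Sketch: isolate the target-side port in a netns and verify connectivity,
# mirroring the nvmf_tcp_init steps traced above.
set -e
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$TARGET_NS"                        # private namespace for the target port
ip link set "$TARGET_IF" netns "$TARGET_NS"      # move the target port into it

ip addr add "$NVMF_INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Allow NVMe/TCP traffic in from the initiator side and prove the path works both ways.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport "$NVMF_PORT" -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$TARGET_NS" ping -c 1 "$NVMF_INITIATOR_IP"
```

The two ping checks correspond to the "1 packets transmitted, 1 received" statistics reported in the trace right after this point.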
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:52.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:52.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:35:52.402 00:35:52.402 --- 10.0.0.2 ping statistics --- 00:35:52.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.402 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:52.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:52.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:35:52.402 00:35:52.402 --- 10.0.0.1 ping statistics --- 00:35:52.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.402 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=987544 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 987544 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@829 -- # '[' -z 987544 ']' 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:52.402 23:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.662 [2024-07-22 23:16:28.791995] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:52.662 [2024-07-22 23:16:28.792162] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.662 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.662 [2024-07-22 23:16:28.912896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:52.920 [2024-07-22 23:16:29.026024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.920 [2024-07-22 23:16:29.026087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.920 [2024-07-22 23:16:29.026109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.920 [2024-07-22 23:16:29.026126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.920 [2024-07-22 23:16:29.026140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
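nvmfappstart then launches nvmf_tgt inside that namespace with core mask 0x1E and blocks in waitforlisten until the RPC socket answers, which is what produces the "Waiting for process to start up..." message and the EAL/reactor notices above. A rough, stand-alone equivalent of that start-and-wait step is sketched below; the socket-existence poll is a simplification of what waitforlisten actually checks, and the binary path is the build-tree location shown in the trace.

```bash
#!/usr/bin/env bash
# Sketch: start the NVMe-oF target inside the test namespace and wait for its
# RPC socket, approximating the nvmfappstart + waitforlisten steps traced above.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
RPC_SOCK=/var/tmp/spdk.sock
TARGET_NS=cvl_0_0_ns_spdk

ip netns exec "$TARGET_NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll for the RPC socket; bail out early if the target dies during startup.
for ((i = 0; i < 100; i++)); do
    [[ -S $RPC_SOCK ]] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done
[[ -S $RPC_SOCK ]] || { echo "timed out waiting for $RPC_SOCK" >&2; exit 1; }
echo "nvmf_tgt (pid $nvmfpid) is up and listening on $RPC_SOCK"
```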
00:35:52.920 [2024-07-22 23:16:29.026247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:52.920 [2024-07-22 23:16:29.026322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:52.920 [2024-07-22 23:16:29.026379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:35:52.920 [2024-07-22 23:16:29.026384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.177 [2024-07-22 23:16:29.288857] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.177 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.177 Malloc1 00:35:53.177 [2024-07-22 23:16:29.387717] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.177 Malloc2 00:35:53.177 Malloc3 00:35:53.435 Malloc4 00:35:53.435 Malloc5 00:35:53.435 Malloc6 00:35:53.435 Malloc7 00:35:53.694 Malloc8 00:35:53.694 Malloc9 00:35:53.694 Malloc10 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=987721 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 987721 /var/tmp/bdevperf.sock 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 987721 ']' 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:35:53.694 23:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:53.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.694 { 00:35:53.694 "params": { 00:35:53.694 "name": "Nvme$subsystem", 00:35:53.694 "trtype": "$TEST_TRANSPORT", 00:35:53.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.694 "adrfam": "ipv4", 00:35:53.694 "trsvcid": "$NVMF_PORT", 00:35:53.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.694 "hdgst": ${hdgst:-false}, 00:35:53.694 "ddgst": ${ddgst:-false} 00:35:53.694 }, 00:35:53.694 "method": "bdev_nvme_attach_controller" 00:35:53.694 } 00:35:53.694 EOF 00:35:53.694 )") 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.694 { 00:35:53.694 "params": { 00:35:53.694 "name": "Nvme$subsystem", 00:35:53.694 "trtype": "$TEST_TRANSPORT", 00:35:53.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.694 "adrfam": "ipv4", 00:35:53.694 "trsvcid": "$NVMF_PORT", 00:35:53.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.694 "hdgst": ${hdgst:-false}, 00:35:53.694 "ddgst": ${ddgst:-false} 00:35:53.694 }, 00:35:53.694 "method": "bdev_nvme_attach_controller" 00:35:53.694 } 00:35:53.694 EOF 00:35:53.694 )") 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.694 { 00:35:53.694 "params": { 00:35:53.694 
"name": "Nvme$subsystem", 00:35:53.694 "trtype": "$TEST_TRANSPORT", 00:35:53.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.694 "adrfam": "ipv4", 00:35:53.694 "trsvcid": "$NVMF_PORT", 00:35:53.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.694 "hdgst": ${hdgst:-false}, 00:35:53.694 "ddgst": ${ddgst:-false} 00:35:53.694 }, 00:35:53.694 "method": "bdev_nvme_attach_controller" 00:35:53.694 } 00:35:53.694 EOF 00:35:53.694 )") 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.694 { 00:35:53.694 "params": { 00:35:53.694 "name": "Nvme$subsystem", 00:35:53.694 "trtype": "$TEST_TRANSPORT", 00:35:53.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.694 "adrfam": "ipv4", 00:35:53.694 "trsvcid": "$NVMF_PORT", 00:35:53.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.694 "hdgst": ${hdgst:-false}, 00:35:53.694 "ddgst": ${ddgst:-false} 00:35:53.694 }, 00:35:53.694 "method": "bdev_nvme_attach_controller" 00:35:53.694 } 00:35:53.694 EOF 00:35:53.694 )") 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.694 { 00:35:53.694 "params": { 00:35:53.694 "name": "Nvme$subsystem", 00:35:53.694 "trtype": "$TEST_TRANSPORT", 00:35:53.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.694 "adrfam": "ipv4", 00:35:53.694 "trsvcid": "$NVMF_PORT", 00:35:53.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.694 "hdgst": ${hdgst:-false}, 00:35:53.694 "ddgst": ${ddgst:-false} 00:35:53.694 }, 00:35:53.694 "method": "bdev_nvme_attach_controller" 00:35:53.694 } 00:35:53.694 EOF 00:35:53.694 )") 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.694 { 00:35:53.694 "params": { 00:35:53.694 "name": "Nvme$subsystem", 00:35:53.694 "trtype": "$TEST_TRANSPORT", 00:35:53.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.694 "adrfam": "ipv4", 00:35:53.694 "trsvcid": "$NVMF_PORT", 00:35:53.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.694 "hdgst": ${hdgst:-false}, 00:35:53.694 "ddgst": ${ddgst:-false} 00:35:53.694 }, 00:35:53.694 "method": "bdev_nvme_attach_controller" 00:35:53.694 } 00:35:53.694 EOF 00:35:53.694 )") 00:35:53.694 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.695 { 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme$subsystem", 00:35:53.695 "trtype": "$TEST_TRANSPORT", 00:35:53.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "$NVMF_PORT", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.695 "hdgst": ${hdgst:-false}, 00:35:53.695 "ddgst": ${ddgst:-false} 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 } 00:35:53.695 EOF 00:35:53.695 )") 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.695 { 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme$subsystem", 00:35:53.695 "trtype": "$TEST_TRANSPORT", 00:35:53.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "$NVMF_PORT", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.695 "hdgst": ${hdgst:-false}, 00:35:53.695 "ddgst": ${ddgst:-false} 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 } 00:35:53.695 EOF 00:35:53.695 )") 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.695 { 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme$subsystem", 00:35:53.695 "trtype": "$TEST_TRANSPORT", 00:35:53.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "$NVMF_PORT", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.695 "hdgst": ${hdgst:-false}, 00:35:53.695 "ddgst": ${ddgst:-false} 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 } 00:35:53.695 EOF 00:35:53.695 )") 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.695 { 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme$subsystem", 00:35:53.695 "trtype": "$TEST_TRANSPORT", 00:35:53.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "$NVMF_PORT", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.695 "hdgst": ${hdgst:-false}, 00:35:53.695 "ddgst": ${ddgst:-false} 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 } 00:35:53.695 EOF 00:35:53.695 )") 00:35:53.695 23:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:35:53.695 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme1", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme2", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme3", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme4", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme5", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme6", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme7", 00:35:53.695 "trtype": "tcp", 00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.695 "method": "bdev_nvme_attach_controller" 00:35:53.695 },{ 00:35:53.695 "params": { 00:35:53.695 "name": "Nvme8", 00:35:53.695 "trtype": "tcp", 
00:35:53.695 "traddr": "10.0.0.2", 00:35:53.695 "adrfam": "ipv4", 00:35:53.695 "trsvcid": "4420", 00:35:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:35:53.695 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:35:53.695 "hdgst": false, 00:35:53.695 "ddgst": false 00:35:53.695 }, 00:35:53.696 "method": "bdev_nvme_attach_controller" 00:35:53.696 },{ 00:35:53.696 "params": { 00:35:53.696 "name": "Nvme9", 00:35:53.696 "trtype": "tcp", 00:35:53.696 "traddr": "10.0.0.2", 00:35:53.696 "adrfam": "ipv4", 00:35:53.696 "trsvcid": "4420", 00:35:53.696 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:35:53.696 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:35:53.696 "hdgst": false, 00:35:53.696 "ddgst": false 00:35:53.696 }, 00:35:53.696 "method": "bdev_nvme_attach_controller" 00:35:53.696 },{ 00:35:53.696 "params": { 00:35:53.696 "name": "Nvme10", 00:35:53.696 "trtype": "tcp", 00:35:53.696 "traddr": "10.0.0.2", 00:35:53.696 "adrfam": "ipv4", 00:35:53.696 "trsvcid": "4420", 00:35:53.696 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:35:53.696 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:35:53.696 "hdgst": false, 00:35:53.696 "ddgst": false 00:35:53.696 }, 00:35:53.696 "method": "bdev_nvme_attach_controller" 00:35:53.696 }' 00:35:53.696 [2024-07-22 23:16:29.994605] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:35:53.696 [2024-07-22 23:16:29.994775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987721 ] 00:35:53.954 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.954 [2024-07-22 23:16:30.107523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.954 [2024-07-22 23:16:30.217288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.856 Running I/O for 10 seconds... 
00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.114 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:35:56.373 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.373 23:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 987721 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 987721 ']' 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 987721 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987721 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 987721' 00:35:56.632 killing process with pid 987721 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 987721 00:35:56.632 23:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 987721 00:35:56.891 Received shutdown signal, test time was about 1.072318 seconds 00:35:56.891 00:35:56.891 Latency(us) 00:35:56.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.891 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme1n1 : 1.04 185.38 11.59 0.00 0.00 339679.26 25631.86 321563.31 00:35:56.891 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme2n1 : 1.07 179.24 11.20 0.00 0.00 342963.14 26408.58 352632.23 00:35:56.891 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme3n1 : 1.03 185.93 11.62 0.00 0.00 322669.23 36700.16 330883.98 00:35:56.891 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme4n1 : 1.05 187.34 11.71 0.00 0.00 311385.91 3713.71 340204.66 00:35:56.891 Job: Nvme5n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme5n1 : 1.07 179.46 11.22 0.00 0.00 318695.60 27962.03 324670.20 00:35:56.891 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme6n1 : 1.06 181.09 11.32 0.00 0.00 307276.36 25437.68 329330.54 00:35:56.891 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme7n1 : 1.05 182.55 11.41 0.00 0.00 296053.32 43108.12 307582.29 00:35:56.891 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme8n1 : 1.06 181.97 11.37 0.00 0.00 288892.90 35146.71 333990.87 00:35:56.891 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme9n1 : 1.02 125.71 7.86 0.00 0.00 400123.83 44661.57 352632.23 00:35:56.891 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:56.891 Verification LBA range: start 0x0 length 0x400 00:35:56.891 Nvme10n1 : 1.02 125.47 7.84 0.00 0.00 388796.11 23301.69 377487.36 00:35:56.891 =================================================================================================================== 00:35:56.891 Total : 1714.13 107.13 0.00 0.00 327132.24 3713.71 377487.36 00:35:57.149 23:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 987544 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:58.081 rmmod nvme_tcp 00:35:58.081 rmmod nvme_fabrics 00:35:58.081 rmmod nvme_keyring 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:58.081 23:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 987544 ']' 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 987544 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 987544 ']' 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 987544 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987544 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 987544' 00:35:58.081 killing process with pid 987544 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 987544 00:35:58.081 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 987544 00:35:59.016 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:59.016 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:59.016 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:59.016 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:59.016 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:59.016 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.016 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.016 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:00.925 00:36:00.925 real 0m8.617s 00:36:00.925 user 0m27.053s 00:36:00.925 sys 0m1.976s 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.925 ************************************ 00:36:00.925 END TEST 
nvmf_shutdown_tc2 00:36:00.925 ************************************ 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:00.925 ************************************ 00:36:00.925 START TEST nvmf_shutdown_tc3 00:36:00.925 ************************************ 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.925 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:00.926 
23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:00.926 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:00.926 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:00.926 Found net devices under 0000:84:00.0: cvl_0_0 00:36:00.926 23:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:00.926 Found net devices under 0000:84:00.1: cvl_0_1 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:00.926 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip 
-4 addr flush cvl_0_1 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:00.927 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:01.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:36:01.186 00:36:01.186 --- 10.0.0.2 ping statistics --- 00:36:01.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.186 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:36:01.186 00:36:01.186 --- 10.0.0.1 ping statistics --- 00:36:01.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.186 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=988719 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 988719 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 988719 ']' 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
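The nvmftestinit/nvmf_tcp_init sequence traced above builds the TCP test bed on a single host: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, an iptables rule accepts TCP/4420 on the initiator interface, and the two pings confirm reachability in both directions. Condensed from the commands in the trace (interface names and addresses as logged):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                        # private namespace for the target side
ip link set cvl_0_0 netns "$NS"                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # firewall rule from the trace: accept TCP/4420 arriving on cvl_0_1
ping -c 1 10.0.0.2                                        # default namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> initiator

The target application is then launched inside that namespace, which is why the nvmf_tgt command above is prefixed with ip netns exec cvl_0_0_ns_spdk.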
00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:01.186 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:01.186 [2024-07-22 23:16:37.425049] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:36:01.186 [2024-07-22 23:16:37.425154] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.186 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.444 [2024-07-22 23:16:37.509326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.444 [2024-07-22 23:16:37.621288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.444 [2024-07-22 23:16:37.621370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.444 [2024-07-22 23:16:37.621393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.444 [2024-07-22 23:16:37.621409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.444 [2024-07-22 23:16:37.621424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.444 [2024-07-22 23:16:37.621534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.444 [2024-07-22 23:16:37.621600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.444 [2024-07-22 23:16:37.621659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:01.444 [2024-07-22 23:16:37.621662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:01.703 [2024-07-22 23:16:37.926092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.703 23:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:36:01.703 Malloc1 00:36:01.961 [2024-07-22 23:16:38.029107] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.961 Malloc2 00:36:01.961 Malloc3 00:36:01.961 Malloc4 00:36:01.961 Malloc5 00:36:01.961 Malloc6 00:36:02.220 Malloc7 00:36:02.220 Malloc8 00:36:02.220 Malloc9 00:36:02.220 Malloc10 00:36:02.220 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.220 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:36:02.220 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:02.220 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=988899 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 988899 /var/tmp/bdevperf.sock 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 988899 ']' 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:02.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
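The create_subsystems phase traced above (target/shutdown.sh@26-@35) appends a block of RPCs per subsystem to rpcs.txt and then replays the whole batch with rpc_cmd, which is what produces the Malloc1..Malloc10 bdevs and the NVMe/TCP listener on 10.0.0.2 port 4420 reported in the notices. The per-subsystem RPC lines themselves are not echoed in this trace, so the bodies below are an assumption: they use standard SPDK RPC names that would yield that result, and the bdev size and serial numbers are illustrative only:

rpcs=$testdir/rpcs.txt                       # the rpcs.txt removed by the rm -rf above
rm -f "$rpcs"
for i in "${num_subsystems[@]}"; do          # num_subsystems=({1..10}) in the trace
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$rpcs"                            # replay the batch over the target's RPC socket

bdevperf is then started against all ten subsystems with the JSON emitted by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10; the --json /dev/fd/63 on its command line above is the file descriptor that bash process substitution hands it.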
00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.479 { 00:36:02.479 "params": { 00:36:02.479 "name": "Nvme$subsystem", 00:36:02.479 "trtype": "$TEST_TRANSPORT", 00:36:02.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.479 "adrfam": "ipv4", 00:36:02.479 "trsvcid": "$NVMF_PORT", 00:36:02.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.479 "hdgst": ${hdgst:-false}, 00:36:02.479 "ddgst": ${ddgst:-false} 00:36:02.479 }, 00:36:02.479 "method": "bdev_nvme_attach_controller" 00:36:02.479 } 00:36:02.479 EOF 00:36:02.479 )") 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.479 { 00:36:02.479 "params": { 00:36:02.479 "name": "Nvme$subsystem", 00:36:02.479 "trtype": "$TEST_TRANSPORT", 00:36:02.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.479 "adrfam": "ipv4", 00:36:02.479 "trsvcid": "$NVMF_PORT", 00:36:02.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.479 "hdgst": ${hdgst:-false}, 00:36:02.479 "ddgst": ${ddgst:-false} 00:36:02.479 }, 00:36:02.479 "method": "bdev_nvme_attach_controller" 00:36:02.479 } 00:36:02.479 EOF 00:36:02.479 )") 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.479 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.479 { 00:36:02.479 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:02.480 { 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme$subsystem", 00:36:02.480 "trtype": "$TEST_TRANSPORT", 00:36:02.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "$NVMF_PORT", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.480 "hdgst": ${hdgst:-false}, 00:36:02.480 "ddgst": ${ddgst:-false} 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 } 00:36:02.480 EOF 00:36:02.480 )") 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:36:02.480 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme1", 00:36:02.480 "trtype": "tcp", 00:36:02.480 "traddr": "10.0.0.2", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "4420", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:02.480 "hdgst": false, 00:36:02.480 "ddgst": false 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 },{ 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme2", 00:36:02.480 "trtype": "tcp", 00:36:02.480 "traddr": "10.0.0.2", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "4420", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:02.480 "hdgst": false, 00:36:02.480 "ddgst": false 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 },{ 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme3", 00:36:02.480 "trtype": "tcp", 00:36:02.480 "traddr": "10.0.0.2", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "4420", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:36:02.480 "hdgst": false, 00:36:02.480 "ddgst": false 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 },{ 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme4", 00:36:02.480 "trtype": "tcp", 00:36:02.480 "traddr": "10.0.0.2", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "4420", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:36:02.480 "hdgst": false, 00:36:02.480 "ddgst": false 00:36:02.480 }, 00:36:02.480 "method": "bdev_nvme_attach_controller" 00:36:02.480 },{ 00:36:02.480 "params": { 00:36:02.480 "name": "Nvme5", 00:36:02.480 "trtype": "tcp", 00:36:02.480 "traddr": "10.0.0.2", 00:36:02.480 "adrfam": "ipv4", 00:36:02.480 "trsvcid": "4420", 00:36:02.480 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:36:02.480 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:36:02.480 "hdgst": false, 00:36:02.480 "ddgst": false 00:36:02.480 }, 00:36:02.481 "method": "bdev_nvme_attach_controller" 00:36:02.481 },{ 00:36:02.481 "params": { 00:36:02.481 "name": "Nvme6", 00:36:02.481 "trtype": "tcp", 00:36:02.481 "traddr": "10.0.0.2", 00:36:02.481 "adrfam": "ipv4", 00:36:02.481 "trsvcid": "4420", 00:36:02.481 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:36:02.481 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:36:02.481 "hdgst": false, 00:36:02.481 "ddgst": false 00:36:02.481 }, 00:36:02.481 "method": "bdev_nvme_attach_controller" 00:36:02.481 },{ 00:36:02.481 "params": { 00:36:02.481 "name": "Nvme7", 00:36:02.481 "trtype": "tcp", 00:36:02.481 "traddr": "10.0.0.2", 00:36:02.481 "adrfam": "ipv4", 00:36:02.481 "trsvcid": "4420", 00:36:02.481 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:36:02.481 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:36:02.481 "hdgst": false, 00:36:02.481 "ddgst": false 00:36:02.481 }, 00:36:02.481 "method": "bdev_nvme_attach_controller" 00:36:02.481 },{ 00:36:02.481 "params": { 00:36:02.481 "name": "Nvme8", 00:36:02.481 "trtype": "tcp", 00:36:02.481 "traddr": "10.0.0.2", 00:36:02.481 "adrfam": "ipv4", 00:36:02.481 "trsvcid": "4420", 00:36:02.481 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:36:02.481 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:36:02.481 "hdgst": false, 00:36:02.481 "ddgst": false 00:36:02.481 }, 00:36:02.481 "method": "bdev_nvme_attach_controller" 00:36:02.481 },{ 00:36:02.481 "params": { 00:36:02.481 "name": "Nvme9", 00:36:02.481 "trtype": "tcp", 00:36:02.481 "traddr": "10.0.0.2", 00:36:02.481 "adrfam": "ipv4", 00:36:02.481 "trsvcid": "4420", 00:36:02.481 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:36:02.481 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:36:02.481 "hdgst": false, 00:36:02.481 "ddgst": false 00:36:02.481 }, 00:36:02.481 "method": "bdev_nvme_attach_controller" 00:36:02.481 },{ 00:36:02.481 "params": { 00:36:02.481 "name": "Nvme10", 00:36:02.481 "trtype": "tcp", 00:36:02.481 "traddr": "10.0.0.2", 00:36:02.481 "adrfam": "ipv4", 00:36:02.481 "trsvcid": "4420", 00:36:02.481 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:36:02.481 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:36:02.481 "hdgst": false, 00:36:02.481 "ddgst": false 00:36:02.481 }, 00:36:02.481 "method": "bdev_nvme_attach_controller" 00:36:02.481 }' 00:36:02.481 [2024-07-22 23:16:38.579936] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:36:02.481 [2024-07-22 23:16:38.580035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988899 ] 00:36:02.481 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.481 [2024-07-22 23:16:38.654131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.481 [2024-07-22 23:16:38.759530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.382 Running I/O for 10 seconds... 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:04.382 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.641 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:36:04.641 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:36:04.641 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:36:04.641 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:36:04.641 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:36:04.908 23:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.192 23:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 988719 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 988719 ']' 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 988719 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 988719 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 988719' 00:36:05.192 killing process with pid 988719 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 988719 00:36:05.192 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 988719 00:36:05.193 [2024-07-22 23:16:41.341802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.341961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.341983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.342000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.342017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.342035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.342052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.342072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set 00:36:05.193 [2024-07-22 23:16:41.342089] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b927c0 is same with the state(5) to be set
[... duplicate log lines omitted: the identical tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* message repeats continuously from 23:16:41.342 through 23:16:41.358 for tqpair=0x1b927c0, 0x1b952e0, 0x1b93140, 0x1b93620, 0x1b93ae0 and 0x1b93fa0 ...]
00:36:05.197 [2024-07-22 23:16:41.358202]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the 
state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.358928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93fa0 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.197 [2024-07-22 23:16:41.360852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.360990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 
23:16:41.361025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same 
with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.361705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94460 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363118] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the 
state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.198 [2024-07-22 23:16:41.363581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.363995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.364149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94940 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.365123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94e00 is same with the state(5) to be set 00:36:05.199 [2024-07-22 23:16:41.370452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:05.199 [2024-07-22 23:16:41.370506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.199 [2024-07-22 23:16:41.370532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:05.199 [2024-07-22 23:16:41.370551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
00:36:05.199 [2024-07-22 23:16:41.370452 - 23:16:41.372748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(5) to be set, for tqpair=0x150ae70, 0x15e9f40, 0x101a800, 0x15095b0, 0x1501df0, 0xf42610, 0x1014260, 0x144b110, 0x14547d0 and 0x144cc80
00:36:05.200 [2024-07-22 23:16:41.373010 - 23:16:41.375073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14-63 nsid:1 lba:18176-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:05.201 [2024-07-22 23:16:41.375100 - 23:16:41.375418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-7 nsid:1 lba:16384-17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:05.202 [2024-07-22 23:16:41.375440] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.375459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.375481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.375499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.375521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.375540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.375562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.375581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.375610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.375634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.375657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.375676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.375806] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10184f0 was disconnected and freed. reset controller. 
00:36:05.202 [2024-07-22 23:16:41.376494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 
23:16:41.376920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.376967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.376986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 
23:16:41.377347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.202 [2024-07-22 23:16:41.377557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.202 [2024-07-22 23:16:41.377576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.377599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.377618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.377639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.377658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.377680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.377698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.377720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.377739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 
23:16:41.394531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.394979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.394997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 
23:16:41.395019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 
23:16:41.395456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 
23:16:41.395855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.395959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.395981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.203 [2024-07-22 23:16:41.396000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.203 [2024-07-22 23:16:41.396169] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15ae560 was disconnected and freed. reset controller. 00:36:05.203 [2024-07-22 23:16:41.396506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150ae70 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e9f40 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a800 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15095b0 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1501df0 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42610 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1014260 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144b110 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14547d0 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.396856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144cc80 (9): Bad file descriptor 00:36:05.204 [2024-07-22 23:16:41.400635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:36:05.204 [2024-07-22 23:16:41.400771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.400804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.400835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.400857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.400879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.400899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.400940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.400970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.400990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.401967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.401986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.402013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.402042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.402063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.402083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.402105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.402123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.402144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.402163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.402184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.204 [2024-07-22 23:16:41.402203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.204 [2024-07-22 23:16:41.402225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.402967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.402985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.205 [2024-07-22 23:16:41.403451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.205 [2024-07-22 23:16:41.403471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1017310 is same with the state(5) to be set 00:36:05.205 [2024-07-22 23:16:41.403580] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1017310 was disconnected and freed. reset controller. 00:36:05.205 [2024-07-22 23:16:41.404265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:36:05.205 [2024-07-22 23:16:41.404565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-07-22 23:16:41.404603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144cc80 with addr=10.0.0.2, port=4420 00:36:05.205 [2024-07-22 23:16:41.404626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144cc80 is same with the state(5) to be set 00:36:05.205 [2024-07-22 23:16:41.407445] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:36:05.205 [2024-07-22 23:16:41.407493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:05.205 [2024-07-22 23:16:41.407730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-07-22 23:16:41.407768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101a800 with addr=10.0.0.2, port=4420 00:36:05.205 [2024-07-22 23:16:41.407791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a800 is same with the state(5) to be set 00:36:05.205 [2024-07-22 23:16:41.407822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144cc80 (9): Bad file descriptor 00:36:05.205 [2024-07-22 23:16:41.408053] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:36:05.205 [2024-07-22 23:16:41.408155] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:36:05.205 [2024-07-22 23:16:41.408250] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:36:05.205 [2024-07-22 23:16:41.408360] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:36:05.205 [2024-07-22 23:16:41.408455] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:36:05.205 [2024-07-22 23:16:41.408547] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:36:05.205 [2024-07-22 23:16:41.408855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-07-22 23:16:41.408894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1014260 with addr=10.0.0.2, port=4420 00:36:05.205 [2024-07-22 23:16:41.408916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1014260 is same with the state(5) to be set 00:36:05.206 [2024-07-22 23:16:41.408942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a800 (9): Bad file descriptor 00:36:05.206 [2024-07-22 23:16:41.408966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:36:05.206 [2024-07-22 23:16:41.408983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:36:05.206 [2024-07-22 23:16:41.409004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:36:05.206 [2024-07-22 23:16:41.409472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.409973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.409994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.206 [2024-07-22 23:16:41.410758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.206 [2024-07-22 23:16:41.410780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.410800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.410821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.410842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.410864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.410883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.410905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.410924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.410946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.410964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.410986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.411966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.411987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.412007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.412030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.412048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.412069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.412088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.412110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.412128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.412148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9bd0 is same with the state(5) to be set 00:36:05.207 [2024-07-22 23:16:41.413897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.413929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.413958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.413979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.414002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.414021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.414044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.414070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.414093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.414113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.414134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.207 [2024-07-22 23:16:41.414153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.207 [2024-07-22 23:16:41.414175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.414970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.414988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:05.208 [2024-07-22 23:16:41.415709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.208 [2024-07-22 23:16:41.415808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.208 [2024-07-22 23:16:41.415827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.415848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.415866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.415886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.415904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.415925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.415943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.415964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.415983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 
23:16:41.416117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.416569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.416588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14db110 is same with the state(5) to be set 00:36:05.209 [2024-07-22 23:16:41.418286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.418966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.418987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.419005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.419027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.419045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.419067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.419086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.419107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.419126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.209 [2024-07-22 23:16:41.419148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.209 [2024-07-22 23:16:41.419167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.419935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.419954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:05.210 [2024-07-22 23:16:41.434842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.434945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.434979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.435020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.435060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.435101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.435141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.435181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 23:16:41.435221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.210 [2024-07-22 
23:16:41.435260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.210 [2024-07-22 23:16:41.435281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.435300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.435337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.435356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.435378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.435397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.435419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.435437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.435459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.435477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.435502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc5c0 is same with the state(5) to be set 00:36:05.211 [2024-07-22 23:16:41.437356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.437980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.211 [2024-07-22 23:16:41.438263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.211 [2024-07-22 23:16:41.438284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.438971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.438993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:05.212 [2024-07-22 23:16:41.439647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.212 [2024-07-22 23:16:41.439766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.212 [2024-07-22 23:16:41.439788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.439806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.439827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.439846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.439867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.439885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.439905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.439924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.439946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.439969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.439991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.440010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.440029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dda50 is same with the state(5) to be set 00:36:05.213 [2024-07-22 
23:16:41.441738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.441800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.441820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.441843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.441863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.441886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.441904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.441926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.441949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.441972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.441990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.442972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.442991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.443012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.443030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.443051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.443069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.443090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.213 [2024-07-22 23:16:41.443109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.213 [2024-07-22 23:16:41.443130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.443962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.443983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.214 [2024-07-22 23:16:41.444420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.214 [2024-07-22 23:16:41.444441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15abe80 is same with the state(5) to be set 00:36:05.215 [2024-07-22 23:16:41.446156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446417] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.446971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.446989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.215 [2024-07-22 23:16:41.447796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.215 [2024-07-22 23:16:41.447817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.447837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.447858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.447877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.447899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.447920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.447942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.447961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.447982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.448001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.448022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.448042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.448064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.448083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:05.216 [2024-07-22 23:16:41.448109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.448129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.448150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.448168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.448189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.448208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.448236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 
23:16:41.459958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.459977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.459998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.460269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.460289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ad1c0 is same with the state(5) to be set 00:36:05.216 [2024-07-22 23:16:41.463753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.463790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.463827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.463848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.463871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.463890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.463913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.216 [2024-07-22 23:16:41.463932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.216 [2024-07-22 23:16:41.463954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.463994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.464970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.464988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.217 [2024-07-22 23:16:41.465370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.217 [2024-07-22 23:16:41.465392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:36:05.218 [2024-07-22 23:16:41.465550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.465928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 
[2024-07-22 23:16:41.465970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.465989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 
23:16:41.466390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.218 [2024-07-22 23:16:41.466448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:05.218 [2024-07-22 23:16:41.466467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1426800 is same with the state(5) to be set 00:36:05.218 [2024-07-22 23:16:41.469488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.218 [2024-07-22 23:16:41.469526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:36:05.218 [2024-07-22 23:16:41.469555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:36:05.218 [2024-07-22 23:16:41.469592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:36:05.218 [2024-07-22 23:16:41.469674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1014260 (9): Bad file descriptor 00:36:05.218 [2024-07-22 23:16:41.469706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:36:05.218 [2024-07-22 23:16:41.469725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:36:05.218 [2024-07-22 23:16:41.469746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:36:05.218 [2024-07-22 23:16:41.469842] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.218 [2024-07-22 23:16:41.469875] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.218 [2024-07-22 23:16:41.469902] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.218 [2024-07-22 23:16:41.469934] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.218 [2024-07-22 23:16:41.469969] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.218 [2024-07-22 23:16:41.469999] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
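A note on the long runs of paired READ / "ABORTED - SQ DELETION" entries above: this is the expected signature of the shutdown test, where every command still outstanding on qpair 1 is completed with an abort status as the queue is torn down during the controller reset. The numbers are internally consistent with the bdevperf job parameters reported further down (depth: 64, IO size: 65536). Assuming 512-byte logical blocks, which this excerpt does not state but which the figures fit, each READ of len:128 blocks corresponds to the 64 KiB I/O size, and the cid values 0 through 63 correspond to the queue depth of 64:

    \[ 128\ \text{blocks} \times 512\ \tfrac{\text{B}}{\text{block}} = 65536\ \text{B} = 64\ \text{KiB}, \qquad \text{cid} \in \{0,\dots,63\} \;\Leftrightarrow\; \text{depth} = 64 \]

The LBAs also advance by 128 blocks per consecutive cid, matching a sequential verify workload.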
00:36:05.218 [2024-07-22 23:16:41.470149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:36:05.218 [2024-07-22 23:16:41.470182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:36:05.218 [2024-07-22 23:16:41.470205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:36:05.489 task offset: 18176 on job bdev=Nvme2n1 fails
00:36:05.489
00:36:05.489 Latency(us)
00:36:05.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:05.489 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme1n1 ended in about 1.02 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme1n1 : 1.02 129.94 8.12 63.00 0.00 327207.43 42331.40 282727.16
00:36:05.489 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme2n1 ended in about 1.01 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme2n1 : 1.01 126.96 7.94 63.48 0.00 323238.12 28932.93 351078.78
00:36:05.489 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme3n1 ended in about 1.02 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme3n1 : 1.02 125.08 7.82 62.54 0.00 320023.01 36505.98 329330.54
00:36:05.489 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme4n1 ended in about 1.03 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme4n1 : 1.03 124.54 7.78 62.27 0.00 313094.95 24175.50 340204.66
00:36:05.489 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme5n1 ended in about 1.05 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme5n1 : 1.05 61.14 3.82 61.14 0.00 467020.61 46409.20 369720.13
00:36:05.489 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme6n1 ended in about 1.05 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme6n1 : 1.05 121.77 7.61 60.88 0.00 304444.56 23204.60 330883.98
00:36:05.489 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme7n1 ended in about 1.06 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme7n1 : 1.06 121.26 7.58 60.63 0.00 297658.85 53205.52 312242.63
00:36:05.489 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme8n1 ended in about 1.07 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme8n1 : 1.07 119.46 7.47 59.73 0.00 294803.28 43884.85 320009.86
00:36:05.489 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme9n1 ended in about 1.01 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme9n1 : 1.01 126.70 7.92 63.35 0.00 266342.15 26796.94 347971.89
00:36:05.489 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:05.489 Job: Nvme10n1 ended in about 1.08 seconds with error
00:36:05.489 Verification LBA range: start 0x0 length 0x400
00:36:05.489 Nvme10n1 : 1.08 59.39 3.71 59.39 0.00 421338.83 25437.68 397682.16
00:36:05.489
=================================================================================================================== 00:36:05.489 Total : 1116.26 69.77 616.43 0.00 325616.26 23204.60 397682.16 00:36:05.489 [2024-07-22 23:16:41.507865] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:05.489 [2024-07-22 23:16:41.507955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:36:05.489 [2024-07-22 23:16:41.507999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.489 [2024-07-22 23:16:41.508363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.508412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14547d0 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.508439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14547d0 is same with the state(5) to be set 00:36:05.489 [2024-07-22 23:16:41.508637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.508674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144b110 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.508696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b110 is same with the state(5) to be set 00:36:05.489 [2024-07-22 23:16:41.508909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.508946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15095b0 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.508967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15095b0 is same with the state(5) to be set 00:36:05.489 [2024-07-22 23:16:41.508989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:05.489 [2024-07-22 23:16:41.509007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:05.489 [2024-07-22 23:16:41.509029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:05.489 [2024-07-22 23:16:41.511536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
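A quick consistency check on the Latency(us) table above (a sanity check on the reported units, not additional measured data): with 64 KiB I/Os, the MiB/s column is the IOPS column scaled by the I/O size, and the Total row is, within rounding, the per-column sum over the ten bdevs:

    \[ \text{MiB/s} = \text{IOPS} \times \frac{65536}{2^{20}}, \qquad \text{e.g. Nvme1n1: } 129.94 \times \frac{65536}{2^{20}} \approx 8.12 \]
    \[ \sum_i \text{IOPS}_i \approx 1116.26, \qquad \sum_i \text{MiB/s}_i = 69.77 \]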
00:36:05.489 [2024-07-22 23:16:41.511825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.511864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1501df0 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.511886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1501df0 is same with the state(5) to be set 00:36:05.489 [2024-07-22 23:16:41.512082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.512117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42610 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.512139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42610 is same with the state(5) to be set 00:36:05.489 [2024-07-22 23:16:41.512338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.512372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150ae70 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.512394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150ae70 is same with the state(5) to be set 00:36:05.489 [2024-07-22 23:16:41.512547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.489 [2024-07-22 23:16:41.512581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e9f40 with addr=10.0.0.2, port=4420 00:36:05.489 [2024-07-22 23:16:41.512601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e9f40 is same with the state(5) to be set 00:36:05.490 [2024-07-22 23:16:41.512634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14547d0 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.512663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144b110 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.512687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15095b0 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.512768] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.490 [2024-07-22 23:16:41.512822] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.490 [2024-07-22 23:16:41.512855] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:05.490 [2024-07-22 23:16:41.512881] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:36:05.490 [2024-07-22 23:16:41.512992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:36:05.490 [2024-07-22 23:16:41.513076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1501df0 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.513111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42610 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.513137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150ae70 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.513161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e9f40 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.513182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.513200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.513218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:36:05.490 [2024-07-22 23:16:41.513241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.513260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.513277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:36:05.490 [2024-07-22 23:16:41.513299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.513332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.513351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:36:05.490 [2024-07-22 23:16:41.513477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:36:05.490 [2024-07-22 23:16:41.513510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:05.490 [2024-07-22 23:16:41.513532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.513550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.513566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:05.490 [2024-07-22 23:16:41.513827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.490 [2024-07-22 23:16:41.513864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144cc80 with addr=10.0.0.2, port=4420 00:36:05.490 [2024-07-22 23:16:41.513886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144cc80 is same with the state(5) to be set 00:36:05.490 [2024-07-22 23:16:41.513906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.513924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.513941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:36:05.490 [2024-07-22 23:16:41.513965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.513990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.514009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:36:05.490 [2024-07-22 23:16:41.514031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.514049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.514066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:36:05.490 [2024-07-22 23:16:41.514086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.514105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.514122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:36:05.490 [2024-07-22 23:16:41.514177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.514202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.514218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.514233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:05.490 [2024-07-22 23:16:41.514434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.490 [2024-07-22 23:16:41.514469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101a800 with addr=10.0.0.2, port=4420 00:36:05.490 [2024-07-22 23:16:41.514490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a800 is same with the state(5) to be set 00:36:05.490 [2024-07-22 23:16:41.514690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.490 [2024-07-22 23:16:41.514728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1014260 with addr=10.0.0.2, port=4420 00:36:05.490 [2024-07-22 23:16:41.514748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1014260 is same with the state(5) to be set 00:36:05.490 [2024-07-22 23:16:41.514773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144cc80 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.514834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101a800 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.514866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1014260 (9): Bad file descriptor 00:36:05.490 [2024-07-22 23:16:41.514889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.514906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.514930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:36:05.490 [2024-07-22 23:16:41.514978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.515002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.515021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.515039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:36:05.490 [2024-07-22 23:16:41.515061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:05.490 [2024-07-22 23:16:41.515079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:05.490 [2024-07-22 23:16:41.515105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:05.490 [2024-07-22 23:16:41.515162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:05.490 [2024-07-22 23:16:41.515186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
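The "(00/08)" in the abort messages above is the NVMe status pair: status code type 0x00 (generic command status) and status code 0x08 (command aborted due to SQ deletion). On the host side, a completion callback can recognize these aborted reads explicitly. Below is a minimal sketch assuming SPDK's public completion types and status constants (struct spdk_nvme_cpl, SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION); the callback name and the handling are illustrative, not the code this test exercises.

    #include "spdk/nvme.h"

    /* Sketch of a read-completion callback that singles out I/O aborted by
     * submission-queue deletion, i.e. the "(00/08)" completions in the log. */
    static void read_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* The qpair was torn down mid-reset; such I/O is normally
             * requeued and retried once the controller reconnects. */
        }
    }

In this run the reconnect attempts themselves fail (the ECONNREFUSED entries above), so the retries eventually give up and the bdevperf jobs end in error, which is what the repeated "Resetting controller failed." lines record.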
00:36:05.750 23:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:36:05.750 23:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 988899 00:36:07.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (988899) - No such process 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:07.133 rmmod nvme_tcp 00:36:07.133 rmmod nvme_fabrics 00:36:07.133 rmmod nvme_keyring 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:36:07.133 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.134 23:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.134 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:09.042 00:36:09.042 real 0m8.086s 00:36:09.042 user 0m20.651s 00:36:09.042 sys 0m1.753s 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:09.042 ************************************ 00:36:09.042 END TEST nvmf_shutdown_tc3 00:36:09.042 ************************************ 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:36:09.042 00:36:09.042 real 0m31.222s 00:36:09.042 user 1m28.424s 00:36:09.042 sys 0m8.411s 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:09.042 ************************************ 00:36:09.042 END TEST nvmf_shutdown 00:36:09.042 ************************************ 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:36:09.042 00:36:09.042 real 20m15.913s 00:36:09.042 user 55m41.480s 00:36:09.042 sys 4m38.035s 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.042 23:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:09.042 ************************************ 00:36:09.042 END TEST nvmf_target_extra 00:36:09.042 ************************************ 00:36:09.042 23:16:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:09.042 23:16:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:36:09.042 23:16:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:09.042 23:16:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.042 23:16:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.301 ************************************ 00:36:09.301 START TEST nvmf_host 00:36:09.301 ************************************ 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:36:09.301 * Looking for test storage... 
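The real/user/sys triplets above are the per-scope timings printed as each run_test level closes: about 8 s for nvmf_shutdown_tc3, 31 s for the whole nvmf_shutdown group, and a little over 20 minutes for the complete nvmf_target_extra suite, each followed by its END TEST banner. The log then enters the nvmf_host group, which run_test dispatches as test/nvmf/nvmf_host.sh --transport=tcp. To repeat just that suite outside Jenkins, the same entry point can be run directly; this sketch reuses the workspace path from the log and assumes the node has the same root privileges, NICs and hugepage setup the CI node provides:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # or any SPDK checkout built the same way
    ./test/nvmf/nvmf_host.sh --transport=tcp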
00:36:09.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.301 ************************************ 00:36:09.301 START TEST nvmf_multicontroller 00:36:09.301 ************************************ 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:36:09.301 * Looking for test storage... 
00:36:09.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.301 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.562 23:16:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:36:09.562 23:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:12.859 23:16:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:12.859 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:12.860 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:12.860 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:12.860 Found net devices under 0000:84:00.0: cvl_0_0 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:12.860 Found net devices under 0000:84:00.1: cvl_0_1 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:12.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:12.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:36:12.860 00:36:12.860 --- 10.0.0.2 ping statistics --- 00:36:12.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.860 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:36:12.860 23:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:12.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:12.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:36:12.860 00:36:12.860 --- 10.0.0.1 ping statistics --- 00:36:12.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.860 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=991493 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 991493 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 991493 ']' 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:12.860 23:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:12.860 [2024-07-22 23:16:49.095420] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
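Before nvmf_tgt is launched, nvmftestinit has split the two E810 ports across network namespaces: cvl_0_0 is moved into the new cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side), an iptables rule admits TCP port 4420 arriving on cvl_0_1, and the two pings confirm reachability in both directions. The target is then started inside that namespace with core mask 0xE, which is why it listens on 10.0.0.2 while every host-side tool connects from the root namespace. Condensed from the trace above (a restatement of the commands already shown, with a repository-relative binary path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target itself runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE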
00:36:12.860 [2024-07-22 23:16:49.095521] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:12.860 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.119 [2024-07-22 23:16:49.172872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:13.120 [2024-07-22 23:16:49.283467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.120 [2024-07-22 23:16:49.283524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.120 [2024-07-22 23:16:49.283544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.120 [2024-07-22 23:16:49.283560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.120 [2024-07-22 23:16:49.283575] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.120 [2024-07-22 23:16:49.283679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:13.120 [2024-07-22 23:16:49.283741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:13.120 [2024-07-22 23:16:49.283744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.499 [2024-07-22 23:16:50.447039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.499 Malloc0 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.499 
23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.499 [2024-07-22 23:16:50.521896] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.499 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.500 [2024-07-22 23:16:50.529764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.500 Malloc1 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.500 23:16:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=991766 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 991766 /var/tmp/bdevperf.sock 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 991766 ']' 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:14.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
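At this point the target has been configured entirely through rpc_cmd, the test-harness wrapper that forwards to scripts/rpc.py inside the target namespace: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, and two subsystems, cnode1 backed by Malloc0 and cnode2 backed by Malloc1, each listening on 10.0.0.2 ports 4420 and 4421 with any host allowed. bdevperf is then started with its own RPC socket so controllers can be attached to it dynamically. As plain rpc.py calls the cnode1 half looks roughly like this (cnode2/Malloc1 repeat the same pattern):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf waits on its own RPC socket for the attach/detach calls that follow
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &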
00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:14.500 23:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.069 NVMe0n1 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.069 1 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.069 request: 00:36:15.069 { 00:36:15.069 "name": "NVMe0", 00:36:15.069 "trtype": "tcp", 00:36:15.069 "traddr": "10.0.0.2", 00:36:15.069 "adrfam": "ipv4", 00:36:15.069 
"trsvcid": "4420", 00:36:15.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.069 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:36:15.069 "hostaddr": "10.0.0.2", 00:36:15.069 "hostsvcid": "60000", 00:36:15.069 "prchk_reftag": false, 00:36:15.069 "prchk_guard": false, 00:36:15.069 "hdgst": false, 00:36:15.069 "ddgst": false, 00:36:15.069 "method": "bdev_nvme_attach_controller", 00:36:15.069 "req_id": 1 00:36:15.069 } 00:36:15.069 Got JSON-RPC error response 00:36:15.069 response: 00:36:15.069 { 00:36:15.069 "code": -114, 00:36:15.069 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:36:15.069 } 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:15.069 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.070 request: 00:36:15.070 { 00:36:15.070 "name": "NVMe0", 00:36:15.070 "trtype": "tcp", 00:36:15.070 "traddr": "10.0.0.2", 00:36:15.070 "adrfam": "ipv4", 00:36:15.070 "trsvcid": "4420", 00:36:15.070 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:15.070 "hostaddr": "10.0.0.2", 00:36:15.070 "hostsvcid": "60000", 00:36:15.070 "prchk_reftag": false, 00:36:15.070 "prchk_guard": false, 00:36:15.070 "hdgst": false, 00:36:15.070 "ddgst": false, 00:36:15.070 "method": "bdev_nvme_attach_controller", 00:36:15.070 "req_id": 1 00:36:15.070 } 00:36:15.070 Got JSON-RPC error response 00:36:15.070 response: 00:36:15.070 { 00:36:15.070 "code": -114, 00:36:15.070 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:36:15.070 } 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.070 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.330 request: 00:36:15.330 { 00:36:15.330 "name": "NVMe0", 00:36:15.330 "trtype": "tcp", 00:36:15.330 "traddr": "10.0.0.2", 00:36:15.330 "adrfam": "ipv4", 00:36:15.330 "trsvcid": "4420", 00:36:15.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.330 "hostaddr": "10.0.0.2", 00:36:15.330 "hostsvcid": "60000", 00:36:15.330 "prchk_reftag": false, 00:36:15.330 "prchk_guard": false, 00:36:15.330 "hdgst": false, 00:36:15.330 "ddgst": false, 00:36:15.330 "multipath": "disable", 00:36:15.330 "method": "bdev_nvme_attach_controller", 00:36:15.330 "req_id": 1 00:36:15.330 } 00:36:15.330 Got JSON-RPC error response 00:36:15.330 response: 00:36:15.330 { 00:36:15.330 "code": -114, 00:36:15.330 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:36:15.330 } 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.330 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.331 request: 00:36:15.331 { 00:36:15.331 "name": "NVMe0", 00:36:15.331 "trtype": "tcp", 00:36:15.331 "traddr": "10.0.0.2", 00:36:15.331 "adrfam": "ipv4", 00:36:15.331 "trsvcid": "4420", 00:36:15.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.331 "hostaddr": "10.0.0.2", 00:36:15.331 "hostsvcid": "60000", 00:36:15.331 "prchk_reftag": false, 00:36:15.331 "prchk_guard": false, 00:36:15.331 "hdgst": false, 00:36:15.331 "ddgst": false, 00:36:15.331 "multipath": "failover", 00:36:15.331 "method": "bdev_nvme_attach_controller", 00:36:15.331 "req_id": 1 00:36:15.331 } 00:36:15.331 Got JSON-RPC error response 00:36:15.331 response: 00:36:15.331 { 00:36:15.331 "code": -114, 00:36:15.331 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:36:15.331 } 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.331 00:36:15.331 23:16:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.331 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:36:15.331 23:16:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:16.715 0 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 991766 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 991766 ']' 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 991766 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 991766 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:16.715 
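The remainder of the multicontroller flow above, sketched with the same hypothetical $rpc_bperf shorthand (arguments copied from the trace): the 4421 path of NVMe0 is detached, a second controller NVMe1 is attached to that listener, the controller count is verified, and the queued bdevperf job is started.

    $rpc_bperf bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc_bperf bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # Expect two controllers (NVMe0 and NVMe1) before driving I/O, as in multicontroller.sh@90
    [ "$($rpc_bperf bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ] || exit 1
    # Kick off the I/O phase of the already-running bdevperf application
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests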
23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 991766' 00:36:16.715 killing process with pid 991766 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 991766 00:36:16.715 23:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 991766 00:36:16.974 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:36:16.975 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:36:16.975 [2024-07-22 23:16:50.680074] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:36:16.975 [2024-07-22 23:16:50.680269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991766 ] 00:36:16.975 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.975 [2024-07-22 23:16:50.797977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.975 [2024-07-22 23:16:50.908317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.975 [2024-07-22 23:16:51.587046] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 77b7f5c2-45e7-4dea-80e7-8a9e5dfe4e59 already exists 00:36:16.975 [2024-07-22 23:16:51.587101] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:77b7f5c2-45e7-4dea-80e7-8a9e5dfe4e59 alias for bdev NVMe1n1 00:36:16.975 [2024-07-22 23:16:51.587124] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:36:16.975 Running I/O for 1 seconds... 
00:36:16.975 00:36:16.975 Latency(us) 00:36:16.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.975 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:36:16.975 NVMe0n1 : 1.00 14037.61 54.83 0.00 0.00 9101.17 5461.33 16311.18 00:36:16.975 =================================================================================================================== 00:36:16.975 Total : 14037.61 54.83 0.00 0.00 9101.17 5461.33 16311.18 00:36:16.975 Received shutdown signal, test time was about 1.000000 seconds 00:36:16.975 00:36:16.975 Latency(us) 00:36:16.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.975 =================================================================================================================== 00:36:16.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:16.975 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:16.975 rmmod nvme_tcp 00:36:16.975 rmmod nvme_fabrics 00:36:16.975 rmmod nvme_keyring 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 991493 ']' 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 991493 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 991493 ']' 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 991493 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:16.975 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 991493 00:36:17.235 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:17.235 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:17.235 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 991493' 00:36:17.235 killing process with pid 991493 00:36:17.235 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 991493 00:36:17.235 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 991493 00:36:17.494 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:17.494 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:17.494 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:17.494 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:17.494 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:17.495 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.495 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:17.495 23:16:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:20.035 00:36:20.035 real 0m10.190s 00:36:20.035 user 0m17.205s 00:36:20.035 sys 0m3.742s 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:20.035 ************************************ 00:36:20.035 END TEST nvmf_multicontroller 00:36:20.035 ************************************ 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.035 ************************************ 00:36:20.035 START TEST nvmf_aer 00:36:20.035 ************************************ 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:36:20.035 * Looking for test storage... 
00:36:20.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:36:20.035 23:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:23.331 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:23.331 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:23.331 Found net devices under 0000:84:00.0: cvl_0_0 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.331 23:16:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:23.331 Found net devices under 0000:84:00.1: cvl_0_1 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:23.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:36:23.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:36:23.331 00:36:23.331 --- 10.0.0.2 ping statistics --- 00:36:23.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.331 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:23.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:23.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:36:23.331 00:36:23.331 --- 10.0.0.1 ping statistics --- 00:36:23.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.331 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:23.331 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=994121 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 994121 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 994121 ']' 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:23.332 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.332 [2024-07-22 23:16:59.384259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
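The nvmf_tcp_init sequence traced above boils down to the following sketch, assuming the two e810 ports detected under 0000:84:00.0/1 are already bound to the kernel driver as cvl_0_0 and cvl_0_1. The target-side port is moved into its own network namespace so target and initiator can share one host; the commands are taken directly from the trace.

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
    modprobe nvme-tcp                                                   # initiator-side kernel transport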
00:36:23.332 [2024-07-22 23:16:59.384441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.332 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.332 [2024-07-22 23:16:59.540182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:23.591 [2024-07-22 23:16:59.694828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.591 [2024-07-22 23:16:59.694933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.591 [2024-07-22 23:16:59.694970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.591 [2024-07-22 23:16:59.695001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.591 [2024-07-22 23:16:59.695029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.591 [2024-07-22 23:16:59.695199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.591 [2024-07-22 23:16:59.695265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:23.591 [2024-07-22 23:16:59.695373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:23.591 [2024-07-22 23:16:59.695377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.591 [2024-07-22 23:16:59.880657] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.591 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.852 Malloc0 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.852 23:16:59 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.852 [2024-07-22 23:16:59.943197] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:23.852 [ 00:36:23.852 { 00:36:23.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:23.852 "subtype": "Discovery", 00:36:23.852 "listen_addresses": [], 00:36:23.852 "allow_any_host": true, 00:36:23.852 "hosts": [] 00:36:23.852 }, 00:36:23.852 { 00:36:23.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:23.852 "subtype": "NVMe", 00:36:23.852 "listen_addresses": [ 00:36:23.852 { 00:36:23.852 "trtype": "TCP", 00:36:23.852 "adrfam": "IPv4", 00:36:23.852 "traddr": "10.0.0.2", 00:36:23.852 "trsvcid": "4420" 00:36:23.852 } 00:36:23.852 ], 00:36:23.852 "allow_any_host": true, 00:36:23.852 "hosts": [], 00:36:23.852 "serial_number": "SPDK00000000000001", 00:36:23.852 "model_number": "SPDK bdev Controller", 00:36:23.852 "max_namespaces": 2, 00:36:23.852 "min_cntlid": 1, 00:36:23.852 "max_cntlid": 65519, 00:36:23.852 "namespaces": [ 00:36:23.852 { 00:36:23.852 "nsid": 1, 00:36:23.852 "bdev_name": "Malloc0", 00:36:23.852 "name": "Malloc0", 00:36:23.852 "nguid": "CE28F47A22444C62A004DCA080F1B455", 00:36:23.852 "uuid": "ce28f47a-2244-4c62-a004-dca080f1b455" 00:36:23.852 } 00:36:23.852 ] 00:36:23.852 } 00:36:23.852 ] 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=994152 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:36:23.852 23:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:36:23.852 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.852 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:23.852 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:36:23.852 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:36:23.852 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:24.112 Malloc1 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.112 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:24.112 [ 00:36:24.112 { 00:36:24.112 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:24.112 "subtype": "Discovery", 00:36:24.112 "listen_addresses": [], 00:36:24.112 "allow_any_host": true, 00:36:24.112 "hosts": [] 00:36:24.113 }, 00:36:24.113 { 00:36:24.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.113 "subtype": "NVMe", 00:36:24.113 "listen_addresses": [ 00:36:24.113 { 00:36:24.113 "trtype": "TCP", 00:36:24.113 "adrfam": "IPv4", 00:36:24.113 "traddr": "10.0.0.2", 00:36:24.113 "trsvcid": "4420" 00:36:24.113 } 00:36:24.113 ], 00:36:24.113 "allow_any_host": true, 00:36:24.113 "hosts": [], 00:36:24.113 "serial_number": "SPDK00000000000001", 00:36:24.113 "model_number": "SPDK bdev Controller", 00:36:24.113 "max_namespaces": 2, 00:36:24.113 "min_cntlid": 1, 00:36:24.113 
"max_cntlid": 65519, 00:36:24.113 "namespaces": [ 00:36:24.113 { 00:36:24.113 "nsid": 1, 00:36:24.113 "bdev_name": "Malloc0", 00:36:24.113 "name": "Malloc0", 00:36:24.113 "nguid": "CE28F47A22444C62A004DCA080F1B455", 00:36:24.113 "uuid": "ce28f47a-2244-4c62-a004-dca080f1b455" 00:36:24.113 }, 00:36:24.113 { 00:36:24.113 "nsid": 2, 00:36:24.113 "bdev_name": "Malloc1", 00:36:24.113 "name": "Malloc1", 00:36:24.113 "nguid": "008078E409EB409687A2B020438FAB1A", 00:36:24.113 "uuid": "008078e4-09eb-4096-87a2-b020438fab1a" 00:36:24.113 } 00:36:24.113 ] 00:36:24.113 } 00:36:24.113 ] 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 994152 00:36:24.113 Asynchronous Event Request test 00:36:24.113 Attaching to 10.0.0.2 00:36:24.113 Attached to 10.0.0.2 00:36:24.113 Registering asynchronous event callbacks... 00:36:24.113 Starting namespace attribute notice tests for all controllers... 00:36:24.113 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:36:24.113 aer_cb - Changed Namespace 00:36:24.113 Cleaning up... 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.113 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:24.373 rmmod nvme_tcp 00:36:24.373 rmmod nvme_fabrics 00:36:24.373 rmmod nvme_keyring 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:36:24.373 23:17:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 994121 ']' 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 994121 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 994121 ']' 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 994121 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994121 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994121' 00:36:24.373 killing process with pid 994121 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 994121 00:36:24.373 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 994121 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.633 23:17:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:27.203 00:36:27.203 real 0m7.138s 00:36:27.203 user 0m5.679s 00:36:27.203 sys 0m3.198s 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:27.203 ************************************ 00:36:27.203 END TEST nvmf_aer 00:36:27.203 ************************************ 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:27.203 23:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.203 ************************************ 00:36:27.203 START TEST nvmf_async_init 00:36:27.203 ************************************ 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 
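For reference, the nvmf_aer run that finishes above reduces to the sketch below. It assumes rpc.py talks to the default /var/tmp/spdk.sock of the nvmf_tgt started inside cvl_0_0_ns_spdk (the trace's rpc_cmd wrapper does the same) and that paths are shortened relative to the SPDK tree; all RPC names and arguments are copied from the trace. The aer tool is launched with -n 2 so it waits for a namespace-attribute notice, which fires once the second namespace is added.

    rpc="scripts/rpc.py"                       # default RPC socket of the target started above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: attach and register AER callbacks; the -t file is what the script polls
    # (waitforfile) before changing the namespace layout
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # Adding a second namespace triggers the "Changed Namespace" AER seen in the trace
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2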
00:36:27.203 * Looking for test storage... 00:36:27.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.203 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:36:27.204 23:17:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=20b4ee16691843c5babd6769099abdfb 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:36:27.204 23:17:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:30.498 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:30.498 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
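The two "Found 0000:84:00.x (0x8086 - 0x159b)" lines above are gather_supported_nvmf_pci_devs matching each port's PCI vendor/device IDs against its supported-NIC tables (0x8086/0x159b is an Intel E810 port driven by ice). A minimal hand check of the same classification, reading sysfs directly rather than the script's cached PCI map, with the address taken from this run:
  pci=0000:84:00.0                                  # example port from this run
  vendor=$(cat /sys/bus/pci/devices/$pci/vendor)    # 0x8086 -> Intel
  device=$(cat /sys/bus/pci/devices/$pci/device)    # 0x159b -> E810 (ice)
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
      echo "Found $pci ($vendor - $device)"
  fi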
00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:30.498 Found net devices under 0000:84:00.0: cvl_0_0 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:30.498 Found net devices under 0000:84:00.1: cvl_0_1 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.498 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:30.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:30.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:36:30.499 00:36:30.499 --- 10.0.0.2 ping statistics --- 00:36:30.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.499 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:30.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:36:30.499 00:36:30.499 --- 10.0.0.1 ping statistics --- 00:36:30.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.499 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=996347 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 996347 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 996347 ']' 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:30.499 23:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:30.499 [2024-07-22 23:17:06.696710] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
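At this point nvmftestinit has turned the two E810 ports into a back-to-back TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is verified in both directions before nvmf_tgt is launched inside the namespace. Condensed from the trace above, the plumbing is roughly:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # accept TCP 4420 arriving on cvl_0_1
  ping -c 1 10.0.0.2                                              # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> initiator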
00:36:30.499 [2024-07-22 23:17:06.696808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.499 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.499 [2024-07-22 23:17:06.791003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.758 [2024-07-22 23:17:06.901802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.758 [2024-07-22 23:17:06.901868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.758 [2024-07-22 23:17:06.901888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.758 [2024-07-22 23:17:06.901904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.758 [2024-07-22 23:17:06.901918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.758 [2024-07-22 23:17:06.901964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.758 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.758 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:36:30.758 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.758 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:30.758 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.017 [2024-07-22 23:17:07.078010] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.017 null0 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:36:31.017 23:17:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 20b4ee16691843c5babd6769099abdfb 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.017 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:31.018 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.018 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.018 [2024-07-22 23:17:07.118304] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.018 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.018 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:36:31.018 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.018 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.277 nvme0n1 00:36:31.277 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.277 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:31.277 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.277 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.277 [ 00:36:31.277 { 00:36:31.277 "name": "nvme0n1", 00:36:31.277 "aliases": [ 00:36:31.277 "20b4ee16-6918-43c5-babd-6769099abdfb" 00:36:31.277 ], 00:36:31.277 "product_name": "NVMe disk", 00:36:31.277 "block_size": 512, 00:36:31.277 "num_blocks": 2097152, 00:36:31.277 "uuid": "20b4ee16-6918-43c5-babd-6769099abdfb", 00:36:31.277 "assigned_rate_limits": { 00:36:31.277 "rw_ios_per_sec": 0, 00:36:31.277 "rw_mbytes_per_sec": 0, 00:36:31.277 "r_mbytes_per_sec": 0, 00:36:31.277 "w_mbytes_per_sec": 0 00:36:31.277 }, 00:36:31.277 "claimed": false, 00:36:31.277 "zoned": false, 00:36:31.277 "supported_io_types": { 00:36:31.277 "read": true, 00:36:31.277 "write": true, 00:36:31.277 "unmap": false, 00:36:31.277 "flush": true, 00:36:31.277 "reset": true, 00:36:31.277 "nvme_admin": true, 00:36:31.277 "nvme_io": true, 00:36:31.277 "nvme_io_md": false, 00:36:31.277 "write_zeroes": true, 00:36:31.277 "zcopy": false, 00:36:31.277 "get_zone_info": false, 00:36:31.277 "zone_management": false, 00:36:31.277 "zone_append": false, 00:36:31.277 "compare": true, 00:36:31.277 "compare_and_write": true, 00:36:31.277 "abort": true, 00:36:31.277 "seek_hole": false, 00:36:31.277 "seek_data": false, 00:36:31.277 "copy": true, 00:36:31.277 "nvme_iov_md": 
false 00:36:31.277 }, 00:36:31.277 "memory_domains": [ 00:36:31.277 { 00:36:31.277 "dma_device_id": "system", 00:36:31.277 "dma_device_type": 1 00:36:31.277 } 00:36:31.277 ], 00:36:31.277 "driver_specific": { 00:36:31.277 "nvme": [ 00:36:31.277 { 00:36:31.277 "trid": { 00:36:31.277 "trtype": "TCP", 00:36:31.277 "adrfam": "IPv4", 00:36:31.277 "traddr": "10.0.0.2", 00:36:31.277 "trsvcid": "4420", 00:36:31.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:31.277 }, 00:36:31.277 "ctrlr_data": { 00:36:31.277 "cntlid": 1, 00:36:31.277 "vendor_id": "0x8086", 00:36:31.277 "model_number": "SPDK bdev Controller", 00:36:31.277 "serial_number": "00000000000000000000", 00:36:31.277 "firmware_revision": "24.09", 00:36:31.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.277 "oacs": { 00:36:31.277 "security": 0, 00:36:31.277 "format": 0, 00:36:31.277 "firmware": 0, 00:36:31.277 "ns_manage": 0 00:36:31.277 }, 00:36:31.277 "multi_ctrlr": true, 00:36:31.277 "ana_reporting": false 00:36:31.277 }, 00:36:31.277 "vs": { 00:36:31.277 "nvme_version": "1.3" 00:36:31.277 }, 00:36:31.277 "ns_data": { 00:36:31.277 "id": 1, 00:36:31.277 "can_share": true 00:36:31.277 } 00:36:31.277 } 00:36:31.277 ], 00:36:31.277 "mp_policy": "active_passive" 00:36:31.277 } 00:36:31.277 } 00:36:31.277 ] 00:36:31.277 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 [2024-07-22 23:17:07.372156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:31.278 [2024-07-22 23:17:07.372273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cc260 (9): Bad file descriptor 00:36:31.278 [2024-07-22 23:17:07.514508] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
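The successful reset above is the heart of the async_init check: the target side is built up over RPC (TCP transport, a 1024 MiB null bdev, subsystem cnode0 with a fixed-NGUID namespace and a 10.0.0.2:4420 listener), the host side attaches a bdev_nvme controller to it, and the controller is then reset and re-queried to confirm the same namespace reappears (cntlid moves from 1 to 2 between the two bdev dumps). A sketch of the same sequence as standalone scripts/rpc.py calls; rpc_cmd in the trace issues these same RPCs, and the client path is assumed from this workspace:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512                                   # 1024 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 20b4ee16691843c5babd6769099abdfb
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_nvme_reset_controller nvme0
  $rpc bdev_get_bdevs -b nvme0n1                                         # cntlid now reads 2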
00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 [ 00:36:31.278 { 00:36:31.278 "name": "nvme0n1", 00:36:31.278 "aliases": [ 00:36:31.278 "20b4ee16-6918-43c5-babd-6769099abdfb" 00:36:31.278 ], 00:36:31.278 "product_name": "NVMe disk", 00:36:31.278 "block_size": 512, 00:36:31.278 "num_blocks": 2097152, 00:36:31.278 "uuid": "20b4ee16-6918-43c5-babd-6769099abdfb", 00:36:31.278 "assigned_rate_limits": { 00:36:31.278 "rw_ios_per_sec": 0, 00:36:31.278 "rw_mbytes_per_sec": 0, 00:36:31.278 "r_mbytes_per_sec": 0, 00:36:31.278 "w_mbytes_per_sec": 0 00:36:31.278 }, 00:36:31.278 "claimed": false, 00:36:31.278 "zoned": false, 00:36:31.278 "supported_io_types": { 00:36:31.278 "read": true, 00:36:31.278 "write": true, 00:36:31.278 "unmap": false, 00:36:31.278 "flush": true, 00:36:31.278 "reset": true, 00:36:31.278 "nvme_admin": true, 00:36:31.278 "nvme_io": true, 00:36:31.278 "nvme_io_md": false, 00:36:31.278 "write_zeroes": true, 00:36:31.278 "zcopy": false, 00:36:31.278 "get_zone_info": false, 00:36:31.278 "zone_management": false, 00:36:31.278 "zone_append": false, 00:36:31.278 "compare": true, 00:36:31.278 "compare_and_write": true, 00:36:31.278 "abort": true, 00:36:31.278 "seek_hole": false, 00:36:31.278 "seek_data": false, 00:36:31.278 "copy": true, 00:36:31.278 "nvme_iov_md": false 00:36:31.278 }, 00:36:31.278 "memory_domains": [ 00:36:31.278 { 00:36:31.278 "dma_device_id": "system", 00:36:31.278 "dma_device_type": 1 00:36:31.278 } 00:36:31.278 ], 00:36:31.278 "driver_specific": { 00:36:31.278 "nvme": [ 00:36:31.278 { 00:36:31.278 "trid": { 00:36:31.278 "trtype": "TCP", 00:36:31.278 "adrfam": "IPv4", 00:36:31.278 "traddr": "10.0.0.2", 00:36:31.278 "trsvcid": "4420", 00:36:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:31.278 }, 00:36:31.278 "ctrlr_data": { 00:36:31.278 "cntlid": 2, 00:36:31.278 "vendor_id": "0x8086", 00:36:31.278 "model_number": "SPDK bdev Controller", 00:36:31.278 "serial_number": "00000000000000000000", 00:36:31.278 "firmware_revision": "24.09", 00:36:31.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.278 "oacs": { 00:36:31.278 "security": 0, 00:36:31.278 "format": 0, 00:36:31.278 "firmware": 0, 00:36:31.278 "ns_manage": 0 00:36:31.278 }, 00:36:31.278 "multi_ctrlr": true, 00:36:31.278 "ana_reporting": false 00:36:31.278 }, 00:36:31.278 "vs": { 00:36:31.278 "nvme_version": "1.3" 00:36:31.278 }, 00:36:31.278 "ns_data": { 00:36:31.278 "id": 1, 00:36:31.278 "can_share": true 00:36:31.278 } 00:36:31.278 } 00:36:31.278 ], 00:36:31.278 "mp_policy": "active_passive" 00:36:31.278 } 00:36:31.278 } 00:36:31.278 ] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eCewXkUp9u 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eCewXkUp9u 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 [2024-07-22 23:17:07.568893] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:31.278 [2024-07-22 23:17:07.569085] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eCewXkUp9u 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 [2024-07-22 23:17:07.576903] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eCewXkUp9u 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.278 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.278 [2024-07-22 23:17:07.584948] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:31.278 [2024-07-22 23:17:07.585035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:36:31.538 nvme0n1 00:36:31.538 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.538 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:31.538 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:36:31.538 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.538 [ 00:36:31.538 { 00:36:31.538 "name": "nvme0n1", 00:36:31.538 "aliases": [ 00:36:31.538 "20b4ee16-6918-43c5-babd-6769099abdfb" 00:36:31.538 ], 00:36:31.538 "product_name": "NVMe disk", 00:36:31.539 "block_size": 512, 00:36:31.539 "num_blocks": 2097152, 00:36:31.539 "uuid": "20b4ee16-6918-43c5-babd-6769099abdfb", 00:36:31.539 "assigned_rate_limits": { 00:36:31.539 "rw_ios_per_sec": 0, 00:36:31.539 "rw_mbytes_per_sec": 0, 00:36:31.539 "r_mbytes_per_sec": 0, 00:36:31.539 "w_mbytes_per_sec": 0 00:36:31.539 }, 00:36:31.539 "claimed": false, 00:36:31.539 "zoned": false, 00:36:31.539 "supported_io_types": { 00:36:31.539 "read": true, 00:36:31.539 "write": true, 00:36:31.539 "unmap": false, 00:36:31.539 "flush": true, 00:36:31.539 "reset": true, 00:36:31.539 "nvme_admin": true, 00:36:31.539 "nvme_io": true, 00:36:31.539 "nvme_io_md": false, 00:36:31.539 "write_zeroes": true, 00:36:31.539 "zcopy": false, 00:36:31.539 "get_zone_info": false, 00:36:31.539 "zone_management": false, 00:36:31.539 "zone_append": false, 00:36:31.539 "compare": true, 00:36:31.539 "compare_and_write": true, 00:36:31.539 "abort": true, 00:36:31.539 "seek_hole": false, 00:36:31.539 "seek_data": false, 00:36:31.539 "copy": true, 00:36:31.539 "nvme_iov_md": false 00:36:31.539 }, 00:36:31.539 "memory_domains": [ 00:36:31.539 { 00:36:31.539 "dma_device_id": "system", 00:36:31.539 "dma_device_type": 1 00:36:31.539 } 00:36:31.539 ], 00:36:31.539 "driver_specific": { 00:36:31.539 "nvme": [ 00:36:31.539 { 00:36:31.539 "trid": { 00:36:31.539 "trtype": "TCP", 00:36:31.539 "adrfam": "IPv4", 00:36:31.539 "traddr": "10.0.0.2", 00:36:31.539 "trsvcid": "4421", 00:36:31.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:31.539 }, 00:36:31.539 "ctrlr_data": { 00:36:31.539 "cntlid": 3, 00:36:31.539 "vendor_id": "0x8086", 00:36:31.539 "model_number": "SPDK bdev Controller", 00:36:31.539 "serial_number": "00000000000000000000", 00:36:31.539 "firmware_revision": "24.09", 00:36:31.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.539 "oacs": { 00:36:31.539 "security": 0, 00:36:31.539 "format": 0, 00:36:31.539 "firmware": 0, 00:36:31.539 "ns_manage": 0 00:36:31.539 }, 00:36:31.539 "multi_ctrlr": true, 00:36:31.539 "ana_reporting": false 00:36:31.539 }, 00:36:31.539 "vs": { 00:36:31.539 "nvme_version": "1.3" 00:36:31.539 }, 00:36:31.539 "ns_data": { 00:36:31.539 "id": 1, 00:36:31.539 "can_share": true 00:36:31.539 } 00:36:31.539 } 00:36:31.539 ], 00:36:31.539 "mp_policy": "active_passive" 00:36:31.539 } 00:36:31.539 } 00:36:31.539 ] 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.eCewXkUp9u 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:36:31.539 23:17:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:31.539 rmmod nvme_tcp 00:36:31.539 rmmod nvme_fabrics 00:36:31.539 rmmod nvme_keyring 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 996347 ']' 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 996347 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 996347 ']' 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 996347 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 996347 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 996347' 00:36:31.539 killing process with pid 996347 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 996347 00:36:31.539 [2024-07-22 23:17:07.783497] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:36:31.539 [2024-07-22 23:17:07.783543] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:31.539 23:17:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 996347 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.799 23:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.799 23:17:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:34.340 00:36:34.340 real 0m7.085s 00:36:34.340 user 0m2.602s 00:36:34.340 sys 0m3.051s 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:34.340 ************************************ 00:36:34.340 END TEST nvmf_async_init 00:36:34.340 ************************************ 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.340 ************************************ 00:36:34.340 START TEST dma 00:36:34.340 ************************************ 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:36:34.340 * Looking for test storage... 00:36:34.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.340 23:17:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:36:34.341 00:36:34.341 real 0m0.100s 00:36:34.341 user 0m0.039s 00:36:34.341 sys 0m0.069s 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:36:34.341 ************************************ 00:36:34.341 END TEST dma 00:36:34.341 ************************************ 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.341 ************************************ 00:36:34.341 START TEST nvmf_identify 00:36:34.341 ************************************ 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:36:34.341 * Looking for test storage... 
00:36:34.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:36:34.341 23:17:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:37.638 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:37.638 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:37.638 23:17:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:37.638 Found net devices under 0000:84:00.0: cvl_0_0 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:37.638 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:37.639 Found net devices under 0000:84:00.1: cvl_0_1 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:37.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:37.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:36:37.639 00:36:37.639 --- 10.0.0.2 ping statistics --- 00:36:37.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.639 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:37.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:37.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:36:37.639 00:36:37.639 --- 10.0.0.1 ping statistics --- 00:36:37.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.639 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=998496 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 998496 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 998496 ']' 00:36:37.639 23:17:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.639 [2024-07-22 23:17:13.456094] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:36:37.639 [2024-07-22 23:17:13.456194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.639 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.639 [2024-07-22 23:17:13.550506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:37.639 [2024-07-22 23:17:13.680390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.639 [2024-07-22 23:17:13.680484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:37.639 [2024-07-22 23:17:13.680521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.639 [2024-07-22 23:17:13.680555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.639 [2024-07-22 23:17:13.680583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
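The trace above has just finished building the two-sided loopback topology this test runs on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened on the initiator interface, and reachability is ping-checked in both directions before nvmf_tgt is launched inside the namespace. As a minimal standalone sketch of that same sequence, assuming the cvl_0_0/cvl_0_1 interface pair already exists and the commands run as root:

    # move the target-side interface into its own namespace; the initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic on the default trsvcid
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
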
00:36:37.639 [2024-07-22 23:17:13.680720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.639 [2024-07-22 23:17:13.680793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:37.639 [2024-07-22 23:17:13.680863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.639 [2024-07-22 23:17:13.680860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.639 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 [2024-07-22 23:17:13.949931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.901 23:17:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 Malloc0 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 [2024-07-22 23:17:14.048683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.901 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.901 [ 00:36:37.901 { 00:36:37.901 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:37.901 "subtype": "Discovery", 00:36:37.901 "listen_addresses": [ 00:36:37.901 { 00:36:37.901 "trtype": "TCP", 00:36:37.901 "adrfam": "IPv4", 00:36:37.901 "traddr": "10.0.0.2", 00:36:37.901 "trsvcid": "4420" 00:36:37.901 } 00:36:37.901 ], 00:36:37.901 "allow_any_host": true, 00:36:37.901 "hosts": [] 00:36:37.901 }, 00:36:37.901 { 00:36:37.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:37.901 "subtype": "NVMe", 00:36:37.901 "listen_addresses": [ 00:36:37.901 { 00:36:37.901 "trtype": "TCP", 00:36:37.901 "adrfam": "IPv4", 00:36:37.901 "traddr": "10.0.0.2", 00:36:37.901 "trsvcid": "4420" 00:36:37.901 } 00:36:37.901 ], 00:36:37.901 "allow_any_host": true, 00:36:37.901 "hosts": [], 00:36:37.901 "serial_number": "SPDK00000000000001", 00:36:37.901 "model_number": "SPDK bdev Controller", 00:36:37.901 "max_namespaces": 32, 00:36:37.902 "min_cntlid": 1, 00:36:37.902 "max_cntlid": 65519, 00:36:37.902 "namespaces": [ 00:36:37.902 { 00:36:37.902 "nsid": 1, 00:36:37.902 "bdev_name": "Malloc0", 00:36:37.902 "name": "Malloc0", 00:36:37.902 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:36:37.902 "eui64": "ABCDEF0123456789", 00:36:37.902 "uuid": "e6eec411-97e1-4b88-9bcd-ca7c39ae4eda" 00:36:37.902 } 00:36:37.902 ] 00:36:37.902 } 00:36:37.902 ] 00:36:37.902 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.902 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:36:37.902 [2024-07-22 23:17:14.091189] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
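With the target listening, the test provisions it entirely over JSON-RPC (the rpc_cmd calls above) and then points spdk_nvme_identify at the discovery NQN, which produces the controller dump that follows. A sketch of the same sequence using scripts/rpc.py directly, assuming the default /var/tmp/spdk.sock RPC socket; the RPC names and arguments are taken from the trace, only the rpc.py invocation style and socket default are assumed:

    # create the TCP transport and a 64 MiB, 512-byte-block malloc bdev to export
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem with one namespace, reachable on 10.0.0.2:4420, plus a discovery listener
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # dump the resulting configuration, then identify the discovery controller over the fabric
    scripts/rpc.py nvmf_get_subsystems
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
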
00:36:37.902 [2024-07-22 23:17:14.091234] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998645 ] 00:36:37.902 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.902 [2024-07-22 23:17:14.134516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:36:37.902 [2024-07-22 23:17:14.134605] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:37.902 [2024-07-22 23:17:14.134620] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:37.902 [2024-07-22 23:17:14.134640] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:37.902 [2024-07-22 23:17:14.134658] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:37.902 [2024-07-22 23:17:14.135072] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:36:37.902 [2024-07-22 23:17:14.135144] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1650ae0 0 00:36:37.902 [2024-07-22 23:17:14.145343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:37.902 [2024-07-22 23:17:14.145372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:37.902 [2024-07-22 23:17:14.145384] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:37.902 [2024-07-22 23:17:14.145393] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:37.902 [2024-07-22 23:17:14.145464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.145481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.145492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.145515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:37.902 [2024-07-22 23:17:14.145551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.156333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 [2024-07-22 23:17:14.156357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.902 [2024-07-22 23:17:14.156368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.156378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.902 [2024-07-22 23:17:14.156404] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:37.902 [2024-07-22 23:17:14.156420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:36:37.902 [2024-07-22 23:17:14.156433] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:36:37.902 [2024-07-22 23:17:14.156467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.156480] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.156489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.156505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.902 [2024-07-22 23:17:14.156544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.156747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 [2024-07-22 23:17:14.156764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.902 [2024-07-22 23:17:14.156774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.156783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.902 [2024-07-22 23:17:14.156801] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:36:37.902 [2024-07-22 23:17:14.156821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:36:37.902 [2024-07-22 23:17:14.156838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.156848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.156857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.156871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.902 [2024-07-22 23:17:14.156900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.157096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 [2024-07-22 23:17:14.157113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.902 [2024-07-22 23:17:14.157122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.902 [2024-07-22 23:17:14.157143] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:36:37.902 [2024-07-22 23:17:14.157162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:36:37.902 [2024-07-22 23:17:14.157179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.157212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.902 [2024-07-22 23:17:14.157240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.157435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 
[2024-07-22 23:17:14.157453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.902 [2024-07-22 23:17:14.157463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.902 [2024-07-22 23:17:14.157485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:37.902 [2024-07-22 23:17:14.157507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.157542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.902 [2024-07-22 23:17:14.157571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.157772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 [2024-07-22 23:17:14.157797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.902 [2024-07-22 23:17:14.157808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.157818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.902 [2024-07-22 23:17:14.157829] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:36:37.902 [2024-07-22 23:17:14.157841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:36:37.902 [2024-07-22 23:17:14.157860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:37.902 [2024-07-22 23:17:14.157973] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:36:37.902 [2024-07-22 23:17:14.157985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:37.902 [2024-07-22 23:17:14.158002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.158013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.158022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.158036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.902 [2024-07-22 23:17:14.158065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.158242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 [2024-07-22 23:17:14.158262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.902 [2024-07-22 23:17:14.158272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.158281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.902 [2024-07-22 23:17:14.158292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:37.902 [2024-07-22 23:17:14.158328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.158343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.902 [2024-07-22 23:17:14.158352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.902 [2024-07-22 23:17:14.158366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.902 [2024-07-22 23:17:14.158396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.902 [2024-07-22 23:17:14.158600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.902 [2024-07-22 23:17:14.158617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.903 [2024-07-22 23:17:14.158626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.158636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.903 [2024-07-22 23:17:14.158646] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:37.903 [2024-07-22 23:17:14.158657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:36:37.903 [2024-07-22 23:17:14.158675] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:36:37.903 [2024-07-22 23:17:14.158694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:36:37.903 [2024-07-22 23:17:14.158719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.158730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.158745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.903 [2024-07-22 23:17:14.158774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.903 [2024-07-22 23:17:14.159000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:37.903 [2024-07-22 23:17:14.159020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:37.903 [2024-07-22 23:17:14.159030] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159039] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1650ae0): datao=0, datal=4096, cccid=0 00:36:37.903 [2024-07-22 23:17:14.159050] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a7240) on tqpair(0x1650ae0): expected_datao=0, payload_size=4096 00:36:37.903 [2024-07-22 23:17:14.159061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159076] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159087] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.903 [2024-07-22 23:17:14.159117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.903 [2024-07-22 23:17:14.159126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.903 [2024-07-22 23:17:14.159156] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:36:37.903 [2024-07-22 23:17:14.159169] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:36:37.903 [2024-07-22 23:17:14.159180] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:36:37.903 [2024-07-22 23:17:14.159191] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:36:37.903 [2024-07-22 23:17:14.159202] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:36:37.903 [2024-07-22 23:17:14.159213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:36:37.903 [2024-07-22 23:17:14.159233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:36:37.903 [2024-07-22 23:17:14.159250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.159284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:37.903 [2024-07-22 23:17:14.159321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.903 [2024-07-22 23:17:14.159520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.903 [2024-07-22 23:17:14.159540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.903 [2024-07-22 23:17:14.159550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:37.903 [2024-07-22 23:17:14.159575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.159613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.903 [2024-07-22 23:17:14.159627] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.159656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.903 [2024-07-22 23:17:14.159669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.159698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.903 [2024-07-22 23:17:14.159711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.159740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.903 [2024-07-22 23:17:14.159752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:36:37.903 [2024-07-22 23:17:14.159778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:37.903 [2024-07-22 23:17:14.159796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.159806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.159820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.903 [2024-07-22 23:17:14.159850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7240, cid 0, qid 0 00:36:37.903 [2024-07-22 23:17:14.159865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a73c0, cid 1, qid 0 00:36:37.903 [2024-07-22 23:17:14.159876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7540, cid 2, qid 0 00:36:37.903 [2024-07-22 23:17:14.159886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:37.903 [2024-07-22 23:17:14.159896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7840, cid 4, qid 0 00:36:37.903 [2024-07-22 23:17:14.160139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.903 [2024-07-22 23:17:14.160159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.903 [2024-07-22 23:17:14.160168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.160178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7840) on tqpair=0x1650ae0 00:36:37.903 [2024-07-22 23:17:14.160189] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:36:37.903 [2024-07-22 23:17:14.160201] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:36:37.903 [2024-07-22 23:17:14.160225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.160238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.160252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.903 [2024-07-22 23:17:14.160286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7840, cid 4, qid 0 00:36:37.903 [2024-07-22 23:17:14.164328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:37.903 [2024-07-22 23:17:14.164350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:37.903 [2024-07-22 23:17:14.164360] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.164369] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1650ae0): datao=0, datal=4096, cccid=4 00:36:37.903 [2024-07-22 23:17:14.164379] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a7840) on tqpair(0x1650ae0): expected_datao=0, payload_size=4096 00:36:37.903 [2024-07-22 23:17:14.164390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.164404] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.164414] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.203324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.903 [2024-07-22 23:17:14.203348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.903 [2024-07-22 23:17:14.203358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.203368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7840) on tqpair=0x1650ae0 00:36:37.903 [2024-07-22 23:17:14.203394] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:36:37.903 [2024-07-22 23:17:14.203442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.203458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.203473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.903 [2024-07-22 23:17:14.203489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.203499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:37.903 [2024-07-22 23:17:14.203508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1650ae0) 00:36:37.903 [2024-07-22 23:17:14.203520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.903 [2024-07-22 23:17:14.203558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x16a7840, cid 4, qid 0 00:36:37.903 [2024-07-22 23:17:14.203574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a79c0, cid 5, qid 0 00:36:37.903 [2024-07-22 23:17:14.203820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:37.903 [2024-07-22 23:17:14.203837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:37.903 [2024-07-22 23:17:14.203847] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:37.904 [2024-07-22 23:17:14.203856] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1650ae0): datao=0, datal=1024, cccid=4 00:36:37.904 [2024-07-22 23:17:14.203866] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a7840) on tqpair(0x1650ae0): expected_datao=0, payload_size=1024 00:36:37.904 [2024-07-22 23:17:14.203876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:37.904 [2024-07-22 23:17:14.203889] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:37.904 [2024-07-22 23:17:14.203899] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:37.904 [2024-07-22 23:17:14.203911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:37.904 [2024-07-22 23:17:14.203924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:37.904 [2024-07-22 23:17:14.203933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:37.904 [2024-07-22 23:17:14.203942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a79c0) on tqpair=0x1650ae0 00:36:38.168 [2024-07-22 23:17:14.244480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.168 [2024-07-22 23:17:14.244512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.168 [2024-07-22 23:17:14.244524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.168 [2024-07-22 23:17:14.244534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7840) on tqpair=0x1650ae0 00:36:38.168 [2024-07-22 23:17:14.244557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.168 [2024-07-22 23:17:14.244569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1650ae0) 00:36:38.168 [2024-07-22 23:17:14.244584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.168 [2024-07-22 23:17:14.244626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7840, cid 4, qid 0 00:36:38.168 [2024-07-22 23:17:14.244792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.168 [2024-07-22 23:17:14.244814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.168 [2024-07-22 23:17:14.244823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.244832] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1650ae0): datao=0, datal=3072, cccid=4 00:36:38.169 [2024-07-22 23:17:14.244842] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a7840) on tqpair(0x1650ae0): expected_datao=0, payload_size=3072 00:36:38.169 [2024-07-22 23:17:14.244853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.244866] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.244877] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.244894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.169 [2024-07-22 23:17:14.244907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.169 [2024-07-22 23:17:14.244917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.244926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7840) on tqpair=0x1650ae0 00:36:38.169 [2024-07-22 23:17:14.244946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.244957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1650ae0) 00:36:38.169 [2024-07-22 23:17:14.244972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.169 [2024-07-22 23:17:14.245011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a7840, cid 4, qid 0 00:36:38.169 [2024-07-22 23:17:14.245166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.169 [2024-07-22 23:17:14.245183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.169 [2024-07-22 23:17:14.245193] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.245202] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1650ae0): datao=0, datal=8, cccid=4 00:36:38.169 [2024-07-22 23:17:14.245212] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a7840) on tqpair(0x1650ae0): expected_datao=0, payload_size=8 00:36:38.169 [2024-07-22 23:17:14.245222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.245235] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.245245] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.285465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.169 [2024-07-22 23:17:14.285490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.169 [2024-07-22 23:17:14.285500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.169 [2024-07-22 23:17:14.285509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7840) on tqpair=0x1650ae0 00:36:38.169 ===================================================== 00:36:38.169 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:38.169 ===================================================== 00:36:38.169 Controller Capabilities/Features 00:36:38.169 ================================ 00:36:38.169 Vendor ID: 0000 00:36:38.169 Subsystem Vendor ID: 0000 00:36:38.169 Serial Number: .................... 00:36:38.169 Model Number: ........................................ 
00:36:38.169 Firmware Version: 24.09 00:36:38.169 Recommended Arb Burst: 0 00:36:38.169 IEEE OUI Identifier: 00 00 00 00:36:38.169 Multi-path I/O 00:36:38.169 May have multiple subsystem ports: No 00:36:38.169 May have multiple controllers: No 00:36:38.169 Associated with SR-IOV VF: No 00:36:38.169 Max Data Transfer Size: 131072 00:36:38.169 Max Number of Namespaces: 0 00:36:38.169 Max Number of I/O Queues: 1024 00:36:38.169 NVMe Specification Version (VS): 1.3 00:36:38.169 NVMe Specification Version (Identify): 1.3 00:36:38.169 Maximum Queue Entries: 128 00:36:38.169 Contiguous Queues Required: Yes 00:36:38.169 Arbitration Mechanisms Supported 00:36:38.169 Weighted Round Robin: Not Supported 00:36:38.169 Vendor Specific: Not Supported 00:36:38.169 Reset Timeout: 15000 ms 00:36:38.169 Doorbell Stride: 4 bytes 00:36:38.169 NVM Subsystem Reset: Not Supported 00:36:38.169 Command Sets Supported 00:36:38.169 NVM Command Set: Supported 00:36:38.169 Boot Partition: Not Supported 00:36:38.169 Memory Page Size Minimum: 4096 bytes 00:36:38.169 Memory Page Size Maximum: 4096 bytes 00:36:38.169 Persistent Memory Region: Not Supported 00:36:38.169 Optional Asynchronous Events Supported 00:36:38.169 Namespace Attribute Notices: Not Supported 00:36:38.169 Firmware Activation Notices: Not Supported 00:36:38.169 ANA Change Notices: Not Supported 00:36:38.169 PLE Aggregate Log Change Notices: Not Supported 00:36:38.169 LBA Status Info Alert Notices: Not Supported 00:36:38.169 EGE Aggregate Log Change Notices: Not Supported 00:36:38.169 Normal NVM Subsystem Shutdown event: Not Supported 00:36:38.169 Zone Descriptor Change Notices: Not Supported 00:36:38.169 Discovery Log Change Notices: Supported 00:36:38.169 Controller Attributes 00:36:38.169 128-bit Host Identifier: Not Supported 00:36:38.169 Non-Operational Permissive Mode: Not Supported 00:36:38.169 NVM Sets: Not Supported 00:36:38.169 Read Recovery Levels: Not Supported 00:36:38.169 Endurance Groups: Not Supported 00:36:38.169 Predictable Latency Mode: Not Supported 00:36:38.169 Traffic Based Keep ALive: Not Supported 00:36:38.169 Namespace Granularity: Not Supported 00:36:38.169 SQ Associations: Not Supported 00:36:38.169 UUID List: Not Supported 00:36:38.169 Multi-Domain Subsystem: Not Supported 00:36:38.169 Fixed Capacity Management: Not Supported 00:36:38.169 Variable Capacity Management: Not Supported 00:36:38.169 Delete Endurance Group: Not Supported 00:36:38.169 Delete NVM Set: Not Supported 00:36:38.169 Extended LBA Formats Supported: Not Supported 00:36:38.169 Flexible Data Placement Supported: Not Supported 00:36:38.169 00:36:38.169 Controller Memory Buffer Support 00:36:38.169 ================================ 00:36:38.169 Supported: No 00:36:38.169 00:36:38.169 Persistent Memory Region Support 00:36:38.169 ================================ 00:36:38.169 Supported: No 00:36:38.169 00:36:38.169 Admin Command Set Attributes 00:36:38.169 ============================ 00:36:38.169 Security Send/Receive: Not Supported 00:36:38.169 Format NVM: Not Supported 00:36:38.169 Firmware Activate/Download: Not Supported 00:36:38.169 Namespace Management: Not Supported 00:36:38.169 Device Self-Test: Not Supported 00:36:38.169 Directives: Not Supported 00:36:38.169 NVMe-MI: Not Supported 00:36:38.169 Virtualization Management: Not Supported 00:36:38.169 Doorbell Buffer Config: Not Supported 00:36:38.169 Get LBA Status Capability: Not Supported 00:36:38.169 Command & Feature Lockdown Capability: Not Supported 00:36:38.169 Abort Command Limit: 1 00:36:38.169 Async 
Event Request Limit: 4 00:36:38.169 Number of Firmware Slots: N/A 00:36:38.169 Firmware Slot 1 Read-Only: N/A 00:36:38.169 Firmware Activation Without Reset: N/A 00:36:38.169 Multiple Update Detection Support: N/A 00:36:38.169 Firmware Update Granularity: No Information Provided 00:36:38.169 Per-Namespace SMART Log: No 00:36:38.169 Asymmetric Namespace Access Log Page: Not Supported 00:36:38.169 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:38.169 Command Effects Log Page: Not Supported 00:36:38.169 Get Log Page Extended Data: Supported 00:36:38.169 Telemetry Log Pages: Not Supported 00:36:38.169 Persistent Event Log Pages: Not Supported 00:36:38.169 Supported Log Pages Log Page: May Support 00:36:38.169 Commands Supported & Effects Log Page: Not Supported 00:36:38.169 Feature Identifiers & Effects Log Page:May Support 00:36:38.169 NVMe-MI Commands & Effects Log Page: May Support 00:36:38.169 Data Area 4 for Telemetry Log: Not Supported 00:36:38.169 Error Log Page Entries Supported: 128 00:36:38.169 Keep Alive: Not Supported 00:36:38.169 00:36:38.169 NVM Command Set Attributes 00:36:38.169 ========================== 00:36:38.169 Submission Queue Entry Size 00:36:38.169 Max: 1 00:36:38.169 Min: 1 00:36:38.169 Completion Queue Entry Size 00:36:38.169 Max: 1 00:36:38.169 Min: 1 00:36:38.169 Number of Namespaces: 0 00:36:38.169 Compare Command: Not Supported 00:36:38.169 Write Uncorrectable Command: Not Supported 00:36:38.169 Dataset Management Command: Not Supported 00:36:38.169 Write Zeroes Command: Not Supported 00:36:38.169 Set Features Save Field: Not Supported 00:36:38.169 Reservations: Not Supported 00:36:38.169 Timestamp: Not Supported 00:36:38.169 Copy: Not Supported 00:36:38.169 Volatile Write Cache: Not Present 00:36:38.169 Atomic Write Unit (Normal): 1 00:36:38.169 Atomic Write Unit (PFail): 1 00:36:38.169 Atomic Compare & Write Unit: 1 00:36:38.169 Fused Compare & Write: Supported 00:36:38.169 Scatter-Gather List 00:36:38.169 SGL Command Set: Supported 00:36:38.169 SGL Keyed: Supported 00:36:38.169 SGL Bit Bucket Descriptor: Not Supported 00:36:38.169 SGL Metadata Pointer: Not Supported 00:36:38.169 Oversized SGL: Not Supported 00:36:38.169 SGL Metadata Address: Not Supported 00:36:38.169 SGL Offset: Supported 00:36:38.169 Transport SGL Data Block: Not Supported 00:36:38.169 Replay Protected Memory Block: Not Supported 00:36:38.169 00:36:38.169 Firmware Slot Information 00:36:38.169 ========================= 00:36:38.169 Active slot: 0 00:36:38.169 00:36:38.169 00:36:38.169 Error Log 00:36:38.169 ========= 00:36:38.169 00:36:38.169 Active Namespaces 00:36:38.169 ================= 00:36:38.169 Discovery Log Page 00:36:38.169 ================== 00:36:38.170 Generation Counter: 2 00:36:38.170 Number of Records: 2 00:36:38.170 Record Format: 0 00:36:38.170 00:36:38.170 Discovery Log Entry 0 00:36:38.170 ---------------------- 00:36:38.170 Transport Type: 3 (TCP) 00:36:38.170 Address Family: 1 (IPv4) 00:36:38.170 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:38.170 Entry Flags: 00:36:38.170 Duplicate Returned Information: 1 00:36:38.170 Explicit Persistent Connection Support for Discovery: 1 00:36:38.170 Transport Requirements: 00:36:38.170 Secure Channel: Not Required 00:36:38.170 Port ID: 0 (0x0000) 00:36:38.170 Controller ID: 65535 (0xffff) 00:36:38.170 Admin Max SQ Size: 128 00:36:38.170 Transport Service Identifier: 4420 00:36:38.170 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:38.170 Transport Address: 10.0.0.2 00:36:38.170 
Discovery Log Entry 1 00:36:38.170 ---------------------- 00:36:38.170 Transport Type: 3 (TCP) 00:36:38.170 Address Family: 1 (IPv4) 00:36:38.170 Subsystem Type: 2 (NVM Subsystem) 00:36:38.170 Entry Flags: 00:36:38.170 Duplicate Returned Information: 0 00:36:38.170 Explicit Persistent Connection Support for Discovery: 0 00:36:38.170 Transport Requirements: 00:36:38.170 Secure Channel: Not Required 00:36:38.170 Port ID: 0 (0x0000) 00:36:38.170 Controller ID: 65535 (0xffff) 00:36:38.170 Admin Max SQ Size: 128 00:36:38.170 Transport Service Identifier: 4420 00:36:38.170 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:36:38.170 Transport Address: 10.0.0.2 [2024-07-22 23:17:14.285664] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:36:38.170 [2024-07-22 23:17:14.285697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7240) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.285713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.170 [2024-07-22 23:17:14.285726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a73c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.285736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.170 [2024-07-22 23:17:14.285747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a7540) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.285757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.170 [2024-07-22 23:17:14.285768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.285779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.170 [2024-07-22 23:17:14.285803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.285816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.285825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:38.170 [2024-07-22 23:17:14.285840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.170 [2024-07-22 23:17:14.285873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:38.170 [2024-07-22 23:17:14.286029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.170 [2024-07-22 23:17:14.286049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.170 [2024-07-22 23:17:14.286059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.286084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:38.170 [2024-07-22 
23:17:14.286118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.170 [2024-07-22 23:17:14.286155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:38.170 [2024-07-22 23:17:14.286322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.170 [2024-07-22 23:17:14.286343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.170 [2024-07-22 23:17:14.286352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.286373] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:36:38.170 [2024-07-22 23:17:14.286383] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:36:38.170 [2024-07-22 23:17:14.286406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:38.170 [2024-07-22 23:17:14.286442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.170 [2024-07-22 23:17:14.286471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:38.170 [2024-07-22 23:17:14.286615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.170 [2024-07-22 23:17:14.286639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.170 [2024-07-22 23:17:14.286650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.286683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:38.170 [2024-07-22 23:17:14.286720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.170 [2024-07-22 23:17:14.286748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:38.170 [2024-07-22 23:17:14.286964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.170 [2024-07-22 23:17:14.286980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.170 [2024-07-22 23:17:14.286990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.286999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.287022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.287034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.287044] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:38.170 [2024-07-22 23:17:14.287058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.170 [2024-07-22 23:17:14.287085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:38.170 [2024-07-22 23:17:14.287252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.170 [2024-07-22 23:17:14.287271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.170 [2024-07-22 23:17:14.287281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.287290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.291323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.291343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.291352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1650ae0) 00:36:38.170 [2024-07-22 23:17:14.291367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.170 [2024-07-22 23:17:14.291397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a76c0, cid 3, qid 0 00:36:38.170 [2024-07-22 23:17:14.291591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.170 [2024-07-22 23:17:14.291611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.170 [2024-07-22 23:17:14.291620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.170 [2024-07-22 23:17:14.291630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a76c0) on tqpair=0x1650ae0 00:36:38.170 [2024-07-22 23:17:14.291648] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:36:38.170 00:36:38.170 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:36:38.170 [2024-07-22 23:17:14.348834] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:36:38.170 [2024-07-22 23:17:14.348943] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998649 ] 00:36:38.170 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.170 [2024-07-22 23:17:14.404820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:36:38.170 [2024-07-22 23:17:14.404888] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:38.170 [2024-07-22 23:17:14.404902] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:38.170 [2024-07-22 23:17:14.404921] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:38.170 [2024-07-22 23:17:14.404937] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:38.170 [2024-07-22 23:17:14.405240] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:36:38.170 [2024-07-22 23:17:14.405295] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1deeae0 0 00:36:38.171 [2024-07-22 23:17:14.422320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:38.171 [2024-07-22 23:17:14.422359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:38.171 [2024-07-22 23:17:14.422371] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:38.171 [2024-07-22 23:17:14.422379] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:38.171 [2024-07-22 23:17:14.422435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.422451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.422461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.422480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:38.171 [2024-07-22 23:17:14.422516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.430332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.430356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.430366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.430376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.430400] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:38.171 [2024-07-22 23:17:14.430415] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:36:38.171 [2024-07-22 23:17:14.430428] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:36:38.171 [2024-07-22 23:17:14.430456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.430468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:36:38.171 [2024-07-22 23:17:14.430478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.430493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.430525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.430723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.430744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.430753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.430763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.430779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:36:38.171 [2024-07-22 23:17:14.430804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:36:38.171 [2024-07-22 23:17:14.430823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.430833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.430842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.430856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.430886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.431060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.431080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.431089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.431110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:36:38.171 [2024-07-22 23:17:14.431130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:36:38.171 [2024-07-22 23:17:14.431147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.431179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.431208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.431402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.431423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:36:38.171 [2024-07-22 23:17:14.431432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.431453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:38.171 [2024-07-22 23:17:14.431476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.431513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.431542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.431742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.431759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.431768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.431788] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:36:38.171 [2024-07-22 23:17:14.431800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:36:38.171 [2024-07-22 23:17:14.431818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:38.171 [2024-07-22 23:17:14.431937] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:36:38.171 [2024-07-22 23:17:14.431947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:38.171 [2024-07-22 23:17:14.431963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.431983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.431997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.432025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.432218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.432238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.432247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.432256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on 
tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.432267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:38.171 [2024-07-22 23:17:14.432291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.432304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.432326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.432341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.432371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.432535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.432554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.432564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.432573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.171 [2024-07-22 23:17:14.432583] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:38.171 [2024-07-22 23:17:14.432595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:36:38.171 [2024-07-22 23:17:14.432614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:36:38.171 [2024-07-22 23:17:14.432638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:36:38.171 [2024-07-22 23:17:14.432656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.432667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.171 [2024-07-22 23:17:14.432681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.171 [2024-07-22 23:17:14.432710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.171 [2024-07-22 23:17:14.432965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.171 [2024-07-22 23:17:14.432986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.171 [2024-07-22 23:17:14.432995] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.433009] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=4096, cccid=0 00:36:38.171 [2024-07-22 23:17:14.433020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45240) on tqpair(0x1deeae0): expected_datao=0, payload_size=4096 00:36:38.171 [2024-07-22 23:17:14.433030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.433044] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.433055] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.433072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.171 [2024-07-22 23:17:14.433085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.171 [2024-07-22 23:17:14.433094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.171 [2024-07-22 23:17:14.433104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.172 [2024-07-22 23:17:14.433124] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:36:38.172 [2024-07-22 23:17:14.433136] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:36:38.172 [2024-07-22 23:17:14.433146] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:36:38.172 [2024-07-22 23:17:14.433156] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:36:38.172 [2024-07-22 23:17:14.433166] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:36:38.172 [2024-07-22 23:17:14.433177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.433197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.433213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.433247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:38.172 [2024-07-22 23:17:14.433277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.172 [2024-07-22 23:17:14.433471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.172 [2024-07-22 23:17:14.433491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.172 [2024-07-22 23:17:14.433501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.172 [2024-07-22 23:17:14.433524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.433556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:38.172 [2024-07-22 23:17:14.433570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433588] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.433600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:38.172 [2024-07-22 23:17:14.433613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.433648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:38.172 [2024-07-22 23:17:14.433661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.433691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:38.172 [2024-07-22 23:17:14.433703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.433728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.433746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.433756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.433770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.172 [2024-07-22 23:17:14.433801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45240, cid 0, qid 0 00:36:38.172 [2024-07-22 23:17:14.433816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e453c0, cid 1, qid 0 00:36:38.172 [2024-07-22 23:17:14.433826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45540, cid 2, qid 0 00:36:38.172 [2024-07-22 23:17:14.433837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.172 [2024-07-22 23:17:14.433847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 00:36:38.172 [2024-07-22 23:17:14.437324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.172 [2024-07-22 23:17:14.437347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.172 [2024-07-22 23:17:14.437357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.437367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.172 [2024-07-22 23:17:14.437378] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:36:38.172 [2024-07-22 23:17:14.437390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.437410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.437426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.437440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.437451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.437460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.437474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:38.172 [2024-07-22 23:17:14.437505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 00:36:38.172 [2024-07-22 23:17:14.437699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.172 [2024-07-22 23:17:14.437719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.172 [2024-07-22 23:17:14.437729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.437743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.172 [2024-07-22 23:17:14.437835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.437861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.437881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.437892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.437906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.172 [2024-07-22 23:17:14.437936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 00:36:38.172 [2024-07-22 23:17:14.438119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.172 [2024-07-22 23:17:14.438139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.172 [2024-07-22 23:17:14.438148] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438157] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=4096, cccid=4 00:36:38.172 [2024-07-22 23:17:14.438168] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45840) on tqpair(0x1deeae0): expected_datao=0, payload_size=4096 00:36:38.172 [2024-07-22 23:17:14.438178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438245] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438258] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:36:38.172 [2024-07-22 23:17:14.438416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.172 [2024-07-22 23:17:14.438425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.172 [2024-07-22 23:17:14.438453] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:36:38.172 [2024-07-22 23:17:14.438476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.438500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:36:38.172 [2024-07-22 23:17:14.438518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1deeae0) 00:36:38.172 [2024-07-22 23:17:14.438543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.172 [2024-07-22 23:17:14.438573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 00:36:38.172 [2024-07-22 23:17:14.438768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.172 [2024-07-22 23:17:14.438787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.172 [2024-07-22 23:17:14.438797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438806] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=4096, cccid=4 00:36:38.172 [2024-07-22 23:17:14.438816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45840) on tqpair(0x1deeae0): expected_datao=0, payload_size=4096 00:36:38.172 [2024-07-22 23:17:14.438826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438850] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.438863] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.439008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.172 [2024-07-22 23:17:14.439025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.172 [2024-07-22 23:17:14.439035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.172 [2024-07-22 23:17:14.439045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.172 [2024-07-22 23:17:14.439074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1deeae0) 00:36:38.173 [2024-07-22 23:17:14.439144] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.173 [2024-07-22 23:17:14.439173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 00:36:38.173 [2024-07-22 23:17:14.439361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.173 [2024-07-22 23:17:14.439381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.173 [2024-07-22 23:17:14.439391] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=4096, cccid=4 00:36:38.173 [2024-07-22 23:17:14.439409] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45840) on tqpair(0x1deeae0): expected_datao=0, payload_size=4096 00:36:38.173 [2024-07-22 23:17:14.439419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439443] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439456] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.173 [2024-07-22 23:17:14.439510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.173 [2024-07-22 23:17:14.439519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.173 [2024-07-22 23:17:14.439545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439637] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:36:38.173 [2024-07-22 23:17:14.439648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:36:38.173 [2024-07-22 23:17:14.439659] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:36:38.173 [2024-07-22 23:17:14.439689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1deeae0) 00:36:38.173 [2024-07-22 23:17:14.439716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.173 [2024-07-22 23:17:14.439731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.439749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1deeae0) 00:36:38.173 [2024-07-22 23:17:14.439761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:38.173 [2024-07-22 23:17:14.439796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 00:36:38.173 [2024-07-22 23:17:14.439812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e459c0, cid 5, qid 0 00:36:38.173 [2024-07-22 23:17:14.439995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.173 [2024-07-22 23:17:14.440015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.173 [2024-07-22 23:17:14.440024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.173 [2024-07-22 23:17:14.440033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.173 [2024-07-22 23:17:14.440047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.173 [2024-07-22 23:17:14.440060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.173 [2024-07-22 23:17:14.440069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.440078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e459c0) on tqpair=0x1deeae0 00:36:38.174 [2024-07-22 23:17:14.440100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.440112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.440127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.440155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e459c0, cid 5, qid 0 00:36:38.174 [2024-07-22 23:17:14.440348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.174 [2024-07-22 23:17:14.440369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.174 [2024-07-22 23:17:14.440378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.440387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e459c0) on tqpair=0x1deeae0 00:36:38.174 [2024-07-22 23:17:14.440410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.440422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.440436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.440465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e459c0, cid 5, qid 0 00:36:38.174 [2024-07-22 23:17:14.440662] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.174 [2024-07-22 23:17:14.440681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.174 [2024-07-22 23:17:14.440691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.440700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e459c0) on tqpair=0x1deeae0 00:36:38.174 [2024-07-22 23:17:14.440722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.440734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.440748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.440782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e459c0, cid 5, qid 0 00:36:38.174 [2024-07-22 23:17:14.440978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.174 [2024-07-22 23:17:14.440997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.174 [2024-07-22 23:17:14.441006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.441016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e459c0) on tqpair=0x1deeae0 00:36:38.174 [2024-07-22 23:17:14.441048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.441062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.441077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.441093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.441103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.441116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.441131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.441141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.441154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.441169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.441179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1deeae0) 00:36:38.174 [2024-07-22 23:17:14.441191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.174 [2024-07-22 23:17:14.441221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e459c0, cid 5, qid 0 00:36:38.174 [2024-07-22 23:17:14.441236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45840, cid 4, qid 0 
00:36:38.174 [2024-07-22 23:17:14.441247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45b40, cid 6, qid 0 00:36:38.174 [2024-07-22 23:17:14.441257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45cc0, cid 7, qid 0 00:36:38.174 [2024-07-22 23:17:14.445333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.174 [2024-07-22 23:17:14.445355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.174 [2024-07-22 23:17:14.445365] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445374] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=8192, cccid=5 00:36:38.174 [2024-07-22 23:17:14.445385] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e459c0) on tqpair(0x1deeae0): expected_datao=0, payload_size=8192 00:36:38.174 [2024-07-22 23:17:14.445395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445409] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445420] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.174 [2024-07-22 23:17:14.445444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.174 [2024-07-22 23:17:14.445453] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445462] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=512, cccid=4 00:36:38.174 [2024-07-22 23:17:14.445472] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45840) on tqpair(0x1deeae0): expected_datao=0, payload_size=512 00:36:38.174 [2024-07-22 23:17:14.445488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445501] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445511] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.174 [2024-07-22 23:17:14.445522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.174 [2024-07-22 23:17:14.445535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.175 [2024-07-22 23:17:14.445544] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445553] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=512, cccid=6 00:36:38.175 [2024-07-22 23:17:14.445563] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45b40) on tqpair(0x1deeae0): expected_datao=0, payload_size=512 00:36:38.175 [2024-07-22 23:17:14.445573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445585] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445596] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:38.175 [2024-07-22 23:17:14.445620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:38.175 [2024-07-22 23:17:14.445629] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445638] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1deeae0): datao=0, datal=4096, cccid=7 00:36:38.175 [2024-07-22 23:17:14.445648] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e45cc0) on tqpair(0x1deeae0): expected_datao=0, payload_size=4096 00:36:38.175 [2024-07-22 23:17:14.445658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445671] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445681] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.175 [2024-07-22 23:17:14.445705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.175 [2024-07-22 23:17:14.445714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e459c0) on tqpair=0x1deeae0 00:36:38.175 [2024-07-22 23:17:14.445749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.175 [2024-07-22 23:17:14.445764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.175 [2024-07-22 23:17:14.445773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45840) on tqpair=0x1deeae0 00:36:38.175 [2024-07-22 23:17:14.445803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.175 [2024-07-22 23:17:14.445817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.175 [2024-07-22 23:17:14.445826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45b40) on tqpair=0x1deeae0 00:36:38.175 [2024-07-22 23:17:14.445850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.175 [2024-07-22 23:17:14.445863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.175 [2024-07-22 23:17:14.445872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.175 [2024-07-22 23:17:14.445881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45cc0) on tqpair=0x1deeae0 00:36:38.175 ===================================================== 00:36:38.175 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:38.175 ===================================================== 00:36:38.175 Controller Capabilities/Features 00:36:38.175 ================================ 00:36:38.175 Vendor ID: 8086 00:36:38.175 Subsystem Vendor ID: 8086 00:36:38.175 Serial Number: SPDK00000000000001 00:36:38.175 Model Number: SPDK bdev Controller 00:36:38.175 Firmware Version: 24.09 00:36:38.175 Recommended Arb Burst: 6 00:36:38.175 IEEE OUI Identifier: e4 d2 5c 00:36:38.175 Multi-path I/O 00:36:38.175 May have multiple subsystem ports: Yes 00:36:38.175 May have multiple controllers: Yes 00:36:38.175 Associated with SR-IOV VF: No 00:36:38.175 Max Data Transfer Size: 131072 00:36:38.175 Max Number of Namespaces: 32 00:36:38.175 Max Number of I/O Queues: 127 00:36:38.175 NVMe Specification Version (VS): 1.3 00:36:38.175 NVMe Specification Version (Identify): 1.3 00:36:38.175 Maximum Queue Entries: 128 00:36:38.175 Contiguous Queues Required: Yes 00:36:38.175 
Arbitration Mechanisms Supported 00:36:38.175 Weighted Round Robin: Not Supported 00:36:38.175 Vendor Specific: Not Supported 00:36:38.175 Reset Timeout: 15000 ms 00:36:38.175 Doorbell Stride: 4 bytes 00:36:38.175 NVM Subsystem Reset: Not Supported 00:36:38.175 Command Sets Supported 00:36:38.175 NVM Command Set: Supported 00:36:38.175 Boot Partition: Not Supported 00:36:38.175 Memory Page Size Minimum: 4096 bytes 00:36:38.175 Memory Page Size Maximum: 4096 bytes 00:36:38.175 Persistent Memory Region: Not Supported 00:36:38.175 Optional Asynchronous Events Supported 00:36:38.175 Namespace Attribute Notices: Supported 00:36:38.175 Firmware Activation Notices: Not Supported 00:36:38.175 ANA Change Notices: Not Supported 00:36:38.175 PLE Aggregate Log Change Notices: Not Supported 00:36:38.175 LBA Status Info Alert Notices: Not Supported 00:36:38.175 EGE Aggregate Log Change Notices: Not Supported 00:36:38.175 Normal NVM Subsystem Shutdown event: Not Supported 00:36:38.175 Zone Descriptor Change Notices: Not Supported 00:36:38.175 Discovery Log Change Notices: Not Supported 00:36:38.175 Controller Attributes 00:36:38.175 128-bit Host Identifier: Supported 00:36:38.175 Non-Operational Permissive Mode: Not Supported 00:36:38.175 NVM Sets: Not Supported 00:36:38.175 Read Recovery Levels: Not Supported 00:36:38.175 Endurance Groups: Not Supported 00:36:38.175 Predictable Latency Mode: Not Supported 00:36:38.175 Traffic Based Keep ALive: Not Supported 00:36:38.175 Namespace Granularity: Not Supported 00:36:38.175 SQ Associations: Not Supported 00:36:38.175 UUID List: Not Supported 00:36:38.175 Multi-Domain Subsystem: Not Supported 00:36:38.175 Fixed Capacity Management: Not Supported 00:36:38.175 Variable Capacity Management: Not Supported 00:36:38.175 Delete Endurance Group: Not Supported 00:36:38.175 Delete NVM Set: Not Supported 00:36:38.175 Extended LBA Formats Supported: Not Supported 00:36:38.175 Flexible Data Placement Supported: Not Supported 00:36:38.175 00:36:38.175 Controller Memory Buffer Support 00:36:38.175 ================================ 00:36:38.175 Supported: No 00:36:38.175 00:36:38.175 Persistent Memory Region Support 00:36:38.175 ================================ 00:36:38.175 Supported: No 00:36:38.175 00:36:38.175 Admin Command Set Attributes 00:36:38.175 ============================ 00:36:38.175 Security Send/Receive: Not Supported 00:36:38.175 Format NVM: Not Supported 00:36:38.175 Firmware Activate/Download: Not Supported 00:36:38.175 Namespace Management: Not Supported 00:36:38.175 Device Self-Test: Not Supported 00:36:38.175 Directives: Not Supported 00:36:38.175 NVMe-MI: Not Supported 00:36:38.175 Virtualization Management: Not Supported 00:36:38.175 Doorbell Buffer Config: Not Supported 00:36:38.175 Get LBA Status Capability: Not Supported 00:36:38.175 Command & Feature Lockdown Capability: Not Supported 00:36:38.175 Abort Command Limit: 4 00:36:38.175 Async Event Request Limit: 4 00:36:38.175 Number of Firmware Slots: N/A 00:36:38.175 Firmware Slot 1 Read-Only: N/A 00:36:38.175 Firmware Activation Without Reset: N/A 00:36:38.175 Multiple Update Detection Support: N/A 00:36:38.175 Firmware Update Granularity: No Information Provided 00:36:38.175 Per-Namespace SMART Log: No 00:36:38.175 Asymmetric Namespace Access Log Page: Not Supported 00:36:38.175 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:36:38.175 Command Effects Log Page: Supported 00:36:38.175 Get Log Page Extended Data: Supported 00:36:38.175 Telemetry Log Pages: Not Supported 00:36:38.175 Persistent Event Log 
Pages: Not Supported 00:36:38.175 Supported Log Pages Log Page: May Support 00:36:38.175 Commands Supported & Effects Log Page: Not Supported 00:36:38.175 Feature Identifiers & Effects Log Page:May Support 00:36:38.175 NVMe-MI Commands & Effects Log Page: May Support 00:36:38.175 Data Area 4 for Telemetry Log: Not Supported 00:36:38.175 Error Log Page Entries Supported: 128 00:36:38.175 Keep Alive: Supported 00:36:38.175 Keep Alive Granularity: 10000 ms 00:36:38.175 00:36:38.175 NVM Command Set Attributes 00:36:38.175 ========================== 00:36:38.175 Submission Queue Entry Size 00:36:38.175 Max: 64 00:36:38.175 Min: 64 00:36:38.175 Completion Queue Entry Size 00:36:38.175 Max: 16 00:36:38.175 Min: 16 00:36:38.175 Number of Namespaces: 32 00:36:38.175 Compare Command: Supported 00:36:38.175 Write Uncorrectable Command: Not Supported 00:36:38.175 Dataset Management Command: Supported 00:36:38.175 Write Zeroes Command: Supported 00:36:38.175 Set Features Save Field: Not Supported 00:36:38.175 Reservations: Supported 00:36:38.175 Timestamp: Not Supported 00:36:38.175 Copy: Supported 00:36:38.175 Volatile Write Cache: Present 00:36:38.175 Atomic Write Unit (Normal): 1 00:36:38.175 Atomic Write Unit (PFail): 1 00:36:38.175 Atomic Compare & Write Unit: 1 00:36:38.175 Fused Compare & Write: Supported 00:36:38.175 Scatter-Gather List 00:36:38.175 SGL Command Set: Supported 00:36:38.175 SGL Keyed: Supported 00:36:38.175 SGL Bit Bucket Descriptor: Not Supported 00:36:38.175 SGL Metadata Pointer: Not Supported 00:36:38.175 Oversized SGL: Not Supported 00:36:38.175 SGL Metadata Address: Not Supported 00:36:38.175 SGL Offset: Supported 00:36:38.176 Transport SGL Data Block: Not Supported 00:36:38.176 Replay Protected Memory Block: Not Supported 00:36:38.176 00:36:38.176 Firmware Slot Information 00:36:38.176 ========================= 00:36:38.176 Active slot: 1 00:36:38.176 Slot 1 Firmware Revision: 24.09 00:36:38.176 00:36:38.176 00:36:38.176 Commands Supported and Effects 00:36:38.176 ============================== 00:36:38.176 Admin Commands 00:36:38.176 -------------- 00:36:38.176 Get Log Page (02h): Supported 00:36:38.176 Identify (06h): Supported 00:36:38.176 Abort (08h): Supported 00:36:38.176 Set Features (09h): Supported 00:36:38.176 Get Features (0Ah): Supported 00:36:38.176 Asynchronous Event Request (0Ch): Supported 00:36:38.176 Keep Alive (18h): Supported 00:36:38.176 I/O Commands 00:36:38.176 ------------ 00:36:38.176 Flush (00h): Supported LBA-Change 00:36:38.176 Write (01h): Supported LBA-Change 00:36:38.176 Read (02h): Supported 00:36:38.176 Compare (05h): Supported 00:36:38.176 Write Zeroes (08h): Supported LBA-Change 00:36:38.176 Dataset Management (09h): Supported LBA-Change 00:36:38.176 Copy (19h): Supported LBA-Change 00:36:38.176 00:36:38.176 Error Log 00:36:38.176 ========= 00:36:38.176 00:36:38.176 Arbitration 00:36:38.176 =========== 00:36:38.176 Arbitration Burst: 1 00:36:38.176 00:36:38.176 Power Management 00:36:38.176 ================ 00:36:38.176 Number of Power States: 1 00:36:38.176 Current Power State: Power State #0 00:36:38.176 Power State #0: 00:36:38.176 Max Power: 0.00 W 00:36:38.176 Non-Operational State: Operational 00:36:38.176 Entry Latency: Not Reported 00:36:38.176 Exit Latency: Not Reported 00:36:38.176 Relative Read Throughput: 0 00:36:38.176 Relative Read Latency: 0 00:36:38.176 Relative Write Throughput: 0 00:36:38.176 Relative Write Latency: 0 00:36:38.176 Idle Power: Not Reported 00:36:38.176 Active Power: Not Reported 00:36:38.176 
Non-Operational Permissive Mode: Not Supported 00:36:38.176 00:36:38.176 Health Information 00:36:38.176 ================== 00:36:38.176 Critical Warnings: 00:36:38.176 Available Spare Space: OK 00:36:38.176 Temperature: OK 00:36:38.176 Device Reliability: OK 00:36:38.176 Read Only: No 00:36:38.176 Volatile Memory Backup: OK 00:36:38.176 Current Temperature: 0 Kelvin (-273 Celsius) 00:36:38.176 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:36:38.176 Available Spare: 0% 00:36:38.176 Available Spare Threshold: 0% 00:36:38.176 Life Percentage Used:[2024-07-22 23:17:14.446041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.446073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.446110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e45cc0, cid 7, qid 0 00:36:38.176 [2024-07-22 23:17:14.446320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.176 [2024-07-22 23:17:14.446341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.176 [2024-07-22 23:17:14.446351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45cc0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.446420] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:36:38.176 [2024-07-22 23:17:14.446447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45240) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.446461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.176 [2024-07-22 23:17:14.446473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e453c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.446483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.176 [2024-07-22 23:17:14.446494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e45540) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.446504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.176 [2024-07-22 23:17:14.446515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.446525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:38.176 [2024-07-22 23:17:14.446542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.446576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.446606] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.176 [2024-07-22 23:17:14.446780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.176 [2024-07-22 23:17:14.446799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.176 [2024-07-22 23:17:14.446809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.446834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.446853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.446868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.446904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.176 [2024-07-22 23:17:14.447132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.176 [2024-07-22 23:17:14.447149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.176 [2024-07-22 23:17:14.447158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.447178] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:36:38.176 [2024-07-22 23:17:14.447189] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:36:38.176 [2024-07-22 23:17:14.447216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.447253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.447281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.176 [2024-07-22 23:17:14.447483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.176 [2024-07-22 23:17:14.447502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.176 [2024-07-22 23:17:14.447511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.447543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.447579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.447608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.176 [2024-07-22 23:17:14.447803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.176 [2024-07-22 23:17:14.447820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.176 [2024-07-22 23:17:14.447829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.447861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.447882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.447896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.447923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.176 [2024-07-22 23:17:14.448087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.176 [2024-07-22 23:17:14.448107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.176 [2024-07-22 23:17:14.448116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.448125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.176 [2024-07-22 23:17:14.448148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.448161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.176 [2024-07-22 23:17:14.448170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.176 [2024-07-22 23:17:14.448184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.176 [2024-07-22 23:17:14.448212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.176 [2024-07-22 23:17:14.448355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.177 [2024-07-22 23:17:14.448375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.177 [2024-07-22 23:17:14.448385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.448394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.177 [2024-07-22 23:17:14.448417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.448435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.448445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.177 [2024-07-22 23:17:14.448459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.177 [2024-07-22 23:17:14.448488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.177 [2024-07-22 
23:17:14.448680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.177 [2024-07-22 23:17:14.448699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.177 [2024-07-22 23:17:14.448708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.448718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.177 [2024-07-22 23:17:14.448740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.448753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.448762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.177 [2024-07-22 23:17:14.448776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.177 [2024-07-22 23:17:14.448805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.177 [2024-07-22 23:17:14.449000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.177 [2024-07-22 23:17:14.449019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.177 [2024-07-22 23:17:14.449028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.449038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.177 [2024-07-22 23:17:14.449060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.449073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.449082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.177 [2024-07-22 23:17:14.449096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.177 [2024-07-22 23:17:14.449124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.177 [2024-07-22 23:17:14.449282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.177 [2024-07-22 23:17:14.449301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.177 [2024-07-22 23:17:14.453321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.453337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.177 [2024-07-22 23:17:14.453364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.453377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.453386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1deeae0) 00:36:38.177 [2024-07-22 23:17:14.453401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.177 [2024-07-22 23:17:14.453431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e456c0, cid 3, qid 0 00:36:38.177 [2024-07-22 23:17:14.453648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:38.177 [2024-07-22 23:17:14.453664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:38.177 
[2024-07-22 23:17:14.453673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:38.177 [2024-07-22 23:17:14.453683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e456c0) on tqpair=0x1deeae0 00:36:38.177 [2024-07-22 23:17:14.453701] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:36:38.177 0% 00:36:38.177 Data Units Read: 0 00:36:38.177 Data Units Written: 0 00:36:38.177 Host Read Commands: 0 00:36:38.177 Host Write Commands: 0 00:36:38.177 Controller Busy Time: 0 minutes 00:36:38.177 Power Cycles: 0 00:36:38.177 Power On Hours: 0 hours 00:36:38.177 Unsafe Shutdowns: 0 00:36:38.177 Unrecoverable Media Errors: 0 00:36:38.177 Lifetime Error Log Entries: 0 00:36:38.177 Warning Temperature Time: 0 minutes 00:36:38.177 Critical Temperature Time: 0 minutes 00:36:38.177 00:36:38.177 Number of Queues 00:36:38.177 ================ 00:36:38.177 Number of I/O Submission Queues: 127 00:36:38.177 Number of I/O Completion Queues: 127 00:36:38.177 00:36:38.177 Active Namespaces 00:36:38.177 ================= 00:36:38.177 Namespace ID:1 00:36:38.177 Error Recovery Timeout: Unlimited 00:36:38.177 Command Set Identifier: NVM (00h) 00:36:38.177 Deallocate: Supported 00:36:38.177 Deallocated/Unwritten Error: Not Supported 00:36:38.177 Deallocated Read Value: Unknown 00:36:38.177 Deallocate in Write Zeroes: Not Supported 00:36:38.177 Deallocated Guard Field: 0xFFFF 00:36:38.177 Flush: Supported 00:36:38.177 Reservation: Supported 00:36:38.177 Namespace Sharing Capabilities: Multiple Controllers 00:36:38.177 Size (in LBAs): 131072 (0GiB) 00:36:38.177 Capacity (in LBAs): 131072 (0GiB) 00:36:38.177 Utilization (in LBAs): 131072 (0GiB) 00:36:38.177 NGUID: ABCDEF0123456789ABCDEF0123456789 00:36:38.177 EUI64: ABCDEF0123456789 00:36:38.177 UUID: e6eec411-97e1-4b88-9bcd-ca7c39ae4eda 00:36:38.177 Thin Provisioning: Not Supported 00:36:38.177 Per-NS Atomic Units: Yes 00:36:38.177 Atomic Boundary Size (Normal): 0 00:36:38.177 Atomic Boundary Size (PFail): 0 00:36:38.177 Atomic Boundary Offset: 0 00:36:38.177 Maximum Single Source Range Length: 65535 00:36:38.177 Maximum Copy Length: 65535 00:36:38.177 Maximum Source Range Count: 1 00:36:38.177 NGUID/EUI64 Never Reused: No 00:36:38.177 Namespace Write Protected: No 00:36:38.177 Number of LBA Formats: 1 00:36:38.177 Current LBA Format: LBA Format #00 00:36:38.177 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:38.177 00:36:38.177 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:38.437 23:17:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:38.437 rmmod nvme_tcp 00:36:38.437 rmmod nvme_fabrics 00:36:38.437 rmmod nvme_keyring 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 998496 ']' 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 998496 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 998496 ']' 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 998496 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 998496 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 998496' 00:36:38.437 killing process with pid 998496 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 998496 00:36:38.437 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 998496 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.698 23:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.240 23:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:41.240 00:36:41.240 real 0m6.644s 00:36:41.240 user 0m5.892s 00:36:41.240 sys 0m2.707s 00:36:41.240 23:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:41.240 23:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:41.240 ************************************ 00:36:41.240 END TEST nvmf_identify 00:36:41.240 ************************************ 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 
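[Editor's note, not part of the captured log] The controller and namespace data dumped above is what SPDK's identify host test prints when it queries the NVMe/TCP subsystem that is torn down here, just before TEST nvmf_perf starts. A minimal sketch of the equivalent manual sequence, using the target address, port, NQN and RPCs that appear in this run; the identify binary name/path (build/bin/spdk_nvme_identify) is an assumption and may differ between SPDK versions:

    # start the target and expose one Malloc namespace over NVMe/TCP
    ./build/bin/nvmf_tgt -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512-byte blocks, as in perf.sh
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # dump the controller/namespace data in the same format as the listing above
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'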
00:36:41.240 23:17:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.240 ************************************ 00:36:41.240 START TEST nvmf_perf 00:36:41.240 ************************************ 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:41.240 * Looking for test storage... 00:36:41.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.240 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:41.241 23:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:44.537 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:44.537 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:44.537 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:44.538 Found net devices under 0000:84:00.0: cvl_0_0 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:44.538 Found net devices under 0000:84:00.1: cvl_0_1 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:44.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:36:44.538 00:36:44.538 --- 10.0.0.2 ping statistics --- 00:36:44.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.538 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:44.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:44.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:36:44.538 00:36:44.538 --- 10.0.0.1 ping statistics --- 00:36:44.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.538 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1000719 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@482 -- # waitforlisten 1000719 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1000719 ']' 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:44.538 23:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:44.538 [2024-07-22 23:17:20.721295] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:36:44.538 [2024-07-22 23:17:20.721510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.538 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.799 [2024-07-22 23:17:20.877619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:44.799 [2024-07-22 23:17:21.036558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.799 [2024-07-22 23:17:21.036685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:44.799 [2024-07-22 23:17:21.036722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:44.799 [2024-07-22 23:17:21.036751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:44.799 [2024-07-22 23:17:21.036778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:44.799 [2024-07-22 23:17:21.036942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.799 [2024-07-22 23:17:21.037015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.799 [2024-07-22 23:17:21.037103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:44.799 [2024-07-22 23:17:21.037108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.059 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:45.059 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:36:45.060 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:45.060 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:45.060 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:45.060 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:45.060 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:45.060 23:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:49.259 23:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:36:49.259 23:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:36:49.259 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:36:49.259 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:49.829 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:36:49.829 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:36:49.829 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:36:49.829 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:36:49.829 23:17:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:36:50.087 [2024-07-22 23:17:26.206539] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.087 23:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:50.345 23:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:36:50.345 23:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:50.604 23:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:36:50.604 23:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:50.894 23:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:51.163 [2024-07-22 23:17:27.396618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.163 23:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:51.422 23:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:36:51.422 23:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:36:51.422 23:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:36:51.422 23:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:36:52.802 Initializing NVMe Controllers 00:36:52.802 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:36:52.802 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:36:52.802 Initialization complete. Launching workers. 00:36:52.802 ======================================================== 00:36:52.802 Latency(us) 00:36:52.802 Device Information : IOPS MiB/s Average min max 00:36:52.802 PCIE (0000:82:00.0) NSID 1 from core 0: 61264.56 239.31 521.72 53.83 7372.81 00:36:52.802 ======================================================== 00:36:52.802 Total : 61264.56 239.31 521.72 53.83 7372.81 00:36:52.802 00:36:52.802 23:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:53.060 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.441 Initializing NVMe Controllers 00:36:54.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:54.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:54.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:54.441 Initialization complete. Launching workers. 
00:36:54.441 ======================================================== 00:36:54.441 Latency(us) 00:36:54.441 Device Information : IOPS MiB/s Average min max 00:36:54.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 137.00 0.54 7455.63 195.55 45122.08 00:36:54.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15216.60 6143.56 47906.64 00:36:54.441 ======================================================== 00:36:54.441 Total : 203.00 0.79 9978.90 195.55 47906.64 00:36:54.441 00:36:54.441 23:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:54.441 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.380 Initializing NVMe Controllers 00:36:55.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:55.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:55.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:55.380 Initialization complete. Launching workers. 00:36:55.380 ======================================================== 00:36:55.380 Latency(us) 00:36:55.380 Device Information : IOPS MiB/s Average min max 00:36:55.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6125.00 23.93 5229.79 866.96 8584.02 00:36:55.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3835.00 14.98 8389.57 6563.31 16301.51 00:36:55.381 ======================================================== 00:36:55.381 Total : 9960.00 38.91 6446.43 866.96 16301.51 00:36:55.381 00:36:55.640 23:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:36:55.640 23:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:36:55.640 23:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:55.640 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.179 Initializing NVMe Controllers 00:36:58.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:58.179 Controller IO queue size 128, less than required. 00:36:58.179 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:58.179 Controller IO queue size 128, less than required. 00:36:58.179 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:58.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:58.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:58.179 Initialization complete. Launching workers. 
00:36:58.179 ======================================================== 00:36:58.179 Latency(us) 00:36:58.180 Device Information : IOPS MiB/s Average min max 00:36:58.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1130.73 282.68 116065.26 91286.32 165353.77 00:36:58.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 543.39 135.85 237101.93 101573.45 375753.28 00:36:58.180 ======================================================== 00:36:58.180 Total : 1674.12 418.53 155351.62 91286.32 375753.28 00:36:58.180 00:36:58.180 23:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:36:58.180 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.439 No valid NVMe controllers or AIO or URING devices found 00:36:58.439 Initializing NVMe Controllers 00:36:58.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:58.439 Controller IO queue size 128, less than required. 00:36:58.439 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:58.439 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:36:58.439 Controller IO queue size 128, less than required. 00:36:58.439 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:58.439 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:36:58.439 WARNING: Some requested NVMe devices were skipped 00:36:58.439 23:17:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:36:58.439 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.972 Initializing NVMe Controllers 00:37:00.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:00.972 Controller IO queue size 128, less than required. 00:37:00.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:00.972 Controller IO queue size 128, less than required. 00:37:00.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:00.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:00.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:00.972 Initialization complete. Launching workers. 
00:37:00.972 00:37:00.972 ==================== 00:37:00.972 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:37:00.972 TCP transport: 00:37:00.972 polls: 6305 00:37:00.972 idle_polls: 4193 00:37:00.972 sock_completions: 2112 00:37:00.972 nvme_completions: 4235 00:37:00.972 submitted_requests: 6354 00:37:00.972 queued_requests: 1 00:37:00.972 00:37:00.972 ==================== 00:37:00.972 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:37:00.972 TCP transport: 00:37:00.972 polls: 8043 00:37:00.972 idle_polls: 5791 00:37:00.972 sock_completions: 2252 00:37:00.972 nvme_completions: 4407 00:37:00.972 submitted_requests: 6610 00:37:00.972 queued_requests: 1 00:37:00.972 ======================================================== 00:37:00.972 Latency(us) 00:37:00.972 Device Information : IOPS MiB/s Average min max 00:37:00.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1057.70 264.42 124662.09 86807.86 208651.34 00:37:00.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1100.66 275.17 118272.63 64010.49 175090.95 00:37:00.972 ======================================================== 00:37:00.972 Total : 2158.36 539.59 121403.76 64010.49 208651.34 00:37:00.972 00:37:01.230 23:17:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:37:01.230 23:17:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:01.489 23:17:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:37:01.489 23:17:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:82:00.0 ']' 00:37:01.489 23:17:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:37:05.685 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=048b2f7e-e02c-455f-a5f0-402b52e089af 00:37:05.685 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 048b2f7e-e02c-455f-a5f0-402b52e089af 00:37:05.685 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=048b2f7e-e02c-455f-a5f0-402b52e089af 00:37:05.685 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:37:05.685 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:37:05.685 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:37:05.686 { 00:37:05.686 "uuid": "048b2f7e-e02c-455f-a5f0-402b52e089af", 00:37:05.686 "name": "lvs_0", 00:37:05.686 "base_bdev": "Nvme0n1", 00:37:05.686 "total_data_clusters": 238234, 00:37:05.686 "free_clusters": 238234, 00:37:05.686 "block_size": 512, 00:37:05.686 "cluster_size": 4194304 00:37:05.686 } 00:37:05.686 ]' 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="048b2f7e-e02c-455f-a5f0-402b52e089af") .free_clusters' 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:37:05.686 23:17:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="048b2f7e-e02c-455f-a5f0-402b52e089af") .cluster_size' 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:37:05.686 952936 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:37:05.686 23:17:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 048b2f7e-e02c-455f-a5f0-402b52e089af lbd_0 20480 00:37:06.254 23:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=158f0efe-165e-4223-8908-38c4cec4c9bf 00:37:06.254 23:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 158f0efe-165e-4223-8908-38c4cec4c9bf lvs_n_0 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=77e7042b-6a81-4973-915f-6fbb5575a653 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 77e7042b-6a81-4973-915f-6fbb5575a653 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=77e7042b-6a81-4973-915f-6fbb5575a653 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:37:07.633 23:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:07.892 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:37:07.892 { 00:37:07.892 "uuid": "048b2f7e-e02c-455f-a5f0-402b52e089af", 00:37:07.892 "name": "lvs_0", 00:37:07.892 "base_bdev": "Nvme0n1", 00:37:07.892 "total_data_clusters": 238234, 00:37:07.892 "free_clusters": 233114, 00:37:07.892 "block_size": 512, 00:37:07.892 "cluster_size": 4194304 00:37:07.892 }, 00:37:07.892 { 00:37:07.892 "uuid": "77e7042b-6a81-4973-915f-6fbb5575a653", 00:37:07.892 "name": "lvs_n_0", 00:37:07.892 "base_bdev": "158f0efe-165e-4223-8908-38c4cec4c9bf", 00:37:07.892 "total_data_clusters": 5114, 00:37:07.892 "free_clusters": 5114, 00:37:07.892 "block_size": 512, 00:37:07.892 "cluster_size": 4194304 00:37:07.892 } 00:37:07.892 ]' 00:37:07.892 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="77e7042b-6a81-4973-915f-6fbb5575a653") .free_clusters' 00:37:07.893 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:37:07.893 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="77e7042b-6a81-4973-915f-6fbb5575a653") .cluster_size' 00:37:08.153 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:37:08.153 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:37:08.153 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:37:08.153 20456 00:37:08.153 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:37:08.153 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77e7042b-6a81-4973-915f-6fbb5575a653 lbd_nest_0 20456 00:37:08.735 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e4663895-6317-42a7-b1bf-b5e6f964648f 00:37:08.735 23:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:08.994 23:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:37:08.995 23:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e4663895-6317-42a7-b1bf-b5e6f964648f 00:37:09.565 23:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.134 23:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:37:10.134 23:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:37:10.134 23:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:37:10.134 23:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:10.134 23:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:10.394 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.637 Initializing NVMe Controllers 00:37:22.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:22.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:22.637 Initialization complete. Launching workers. 00:37:22.637 ======================================================== 00:37:22.637 Latency(us) 00:37:22.637 Device Information : IOPS MiB/s Average min max 00:37:22.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.39 0.02 20314.48 243.22 47148.61 00:37:22.637 ======================================================== 00:37:22.637 Total : 49.39 0.02 20314.48 243.22 47148.61 00:37:22.637 00:37:22.637 23:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:22.637 23:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:22.637 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.635 Initializing NVMe Controllers 00:37:32.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:32.635 Initialization complete. Launching workers. 
00:37:32.635 ======================================================== 00:37:32.635 Latency(us) 00:37:32.635 Device Information : IOPS MiB/s Average min max 00:37:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.67 9.83 12720.43 4076.44 47888.98 00:37:32.635 ======================================================== 00:37:32.635 Total : 78.67 9.83 12720.43 4076.44 47888.98 00:37:32.635 00:37:32.635 23:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:37:32.635 23:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:32.635 23:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:32.635 EAL: No free 2048 kB hugepages reported on node 1 00:37:42.619 Initializing NVMe Controllers 00:37:42.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:42.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:42.619 Initialization complete. Launching workers. 00:37:42.619 ======================================================== 00:37:42.619 Latency(us) 00:37:42.619 Device Information : IOPS MiB/s Average min max 00:37:42.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4921.24 2.40 6510.13 610.00 46305.08 00:37:42.619 ======================================================== 00:37:42.619 Total : 4921.24 2.40 6510.13 610.00 46305.08 00:37:42.619 00:37:42.619 23:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:42.619 23:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:42.619 EAL: No free 2048 kB hugepages reported on node 1 00:37:52.621 Initializing NVMe Controllers 00:37:52.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:52.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:52.621 Initialization complete. Launching workers. 00:37:52.621 ======================================================== 00:37:52.621 Latency(us) 00:37:52.621 Device Information : IOPS MiB/s Average min max 00:37:52.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2802.82 350.35 11422.57 844.78 27012.33 00:37:52.621 ======================================================== 00:37:52.621 Total : 2802.82 350.35 11422.57 844.78 27012.33 00:37:52.621 00:37:52.621 23:18:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:37:52.621 23:18:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:37:52.621 23:18:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:52.621 EAL: No free 2048 kB hugepages reported on node 1 00:38:02.611 Initializing NVMe Controllers 00:38:02.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:02.611 Controller IO queue size 128, less than required. 
00:38:02.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:02.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:02.611 Initialization complete. Launching workers. 00:38:02.611 ======================================================== 00:38:02.611 Latency(us) 00:38:02.611 Device Information : IOPS MiB/s Average min max 00:38:02.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8508.73 4.15 15046.15 1893.79 32290.24 00:38:02.611 ======================================================== 00:38:02.611 Total : 8508.73 4.15 15046.15 1893.79 32290.24 00:38:02.611 00:38:02.611 23:18:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:38:02.611 23:18:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:02.611 EAL: No free 2048 kB hugepages reported on node 1 00:38:12.609 Initializing NVMe Controllers 00:38:12.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:12.609 Controller IO queue size 128, less than required. 00:38:12.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:12.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:12.609 Initialization complete. Launching workers. 00:38:12.609 ======================================================== 00:38:12.609 Latency(us) 00:38:12.609 Device Information : IOPS MiB/s Average min max 00:38:12.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1139.72 142.46 112915.81 15622.09 239349.31 00:38:12.609 ======================================================== 00:38:12.609 Total : 1139.72 142.46 112915.81 15622.09 239349.31 00:38:12.609 00:38:12.874 23:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:13.443 23:18:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4663895-6317-42a7-b1bf-b5e6f964648f 00:38:14.382 23:18:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:38:14.951 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 158f0efe-165e-4223-8908-38c4cec4c9bf 00:38:15.517 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:38:15.777 23:18:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:15.777 23:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:15.777 rmmod nvme_tcp 00:38:15.777 rmmod nvme_fabrics 00:38:15.777 rmmod nvme_keyring 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1000719 ']' 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1000719 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1000719 ']' 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1000719 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1000719 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1000719' 00:38:15.777 killing process with pid 1000719 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1000719 00:38:15.777 23:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1000719 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:17.684 23:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.606 23:18:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:19.606 00:38:19.606 real 1m38.824s 00:38:19.606 user 6m4.918s 00:38:19.606 sys 0m19.509s 00:38:19.606 23:18:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:19.606 23:18:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:19.606 ************************************ 00:38:19.606 END TEST nvmf_perf 00:38:19.606 ************************************ 00:38:19.866 23:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:38:19.866 23:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:38:19.866 
23:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:19.866 23:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:19.866 23:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.866 ************************************ 00:38:19.866 START TEST nvmf_fio_host 00:38:19.866 ************************************ 00:38:19.866 23:18:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:38:19.866 * Looking for test storage... 00:38:19.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.866 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:38:19.867 23:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:23.160 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.160 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:23.161 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:23.161 23:18:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:23.161 Found net devices under 0000:84:00.0: cvl_0_0 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:23.161 Found net devices under 0000:84:00.1: cvl_0_1 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:23.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:38:23.161 00:38:23.161 --- 10.0.0.2 ping statistics --- 00:38:23.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.161 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:23.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:38:23.161 00:38:23.161 --- 10.0.0.1 ping statistics --- 00:38:23.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.161 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1013533 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1013533 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1013533 ']' 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:23.161 23:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.422 [2024-07-22 23:18:59.562928] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:38:23.422 [2024-07-22 23:18:59.563126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.422 EAL: No free 2048 kB hugepages reported on node 1 00:38:23.422 [2024-07-22 23:18:59.720676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:23.681 [2024-07-22 23:18:59.878206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.681 [2024-07-22 23:18:59.878278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.681 [2024-07-22 23:18:59.878299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.681 [2024-07-22 23:18:59.878327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.681 [2024-07-22 23:18:59.878343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.681 [2024-07-22 23:18:59.878733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.681 [2024-07-22 23:18:59.878798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:23.681 [2024-07-22 23:18:59.878873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:23.681 [2024-07-22 23:18:59.878877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.942 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:23.942 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:38:23.942 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:24.511 [2024-07-22 23:19:00.668818] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.511 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:38:24.511 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:24.511 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.511 23:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:38:25.078 Malloc1 00:38:25.078 23:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:26.017 23:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:26.017 23:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:26.277 [2024-07-22 23:19:02.541927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.277 23:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:27.216 
23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:27.216 23:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:27.216 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:27.216 fio-3.35 00:38:27.216 Starting 
1 thread 00:38:27.216 EAL: No free 2048 kB hugepages reported on node 1 00:38:29.756 00:38:29.756 test: (groupid=0, jobs=1): err= 0: pid=1014103: Mon Jul 22 23:19:05 2024 00:38:29.756 read: IOPS=6382, BW=24.9MiB/s (26.1MB/s)(51.1MiB/2049msec) 00:38:29.756 slat (usec): min=2, max=268, avg= 6.60, stdev= 3.70 00:38:29.756 clat (usec): min=3029, max=57254, avg=10551.47, stdev=3184.20 00:38:29.756 lat (usec): min=3061, max=57258, avg=10558.07, stdev=3184.12 00:38:29.757 clat percentiles (usec): 00:38:29.757 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10028], 00:38:29.757 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:38:29.757 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:38:29.757 | 99.00th=[11863], 99.50th=[49021], 99.90th=[55837], 99.95th=[56886], 00:38:29.757 | 99.99th=[57410] 00:38:29.757 bw ( KiB/s): min=24736, max=26960, per=100.00%, avg=26020.00, stdev=958.70, samples=4 00:38:29.757 iops : min= 6184, max= 6740, avg=6505.00, stdev=239.67, samples=4 00:38:29.757 write: IOPS=6392, BW=25.0MiB/s (26.2MB/s)(51.2MiB/2049msec); 0 zone resets 00:38:29.757 slat (usec): min=2, max=258, avg= 7.05, stdev= 3.00 00:38:29.757 clat (usec): min=2586, max=56173, avg=9411.38, stdev=2985.69 00:38:29.757 lat (usec): min=2602, max=56177, avg=9418.42, stdev=2985.72 00:38:29.757 clat percentiles (usec): 00:38:29.757 | 1.00th=[ 6063], 5.00th=[ 7767], 10.00th=[ 8291], 20.00th=[ 8848], 00:38:29.757 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:38:29.757 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[ 9896], 95.00th=[ 9896], 00:38:29.757 | 99.00th=[10290], 99.50th=[10683], 99.90th=[54264], 99.95th=[55313], 00:38:29.757 | 99.99th=[56361] 00:38:29.757 bw ( KiB/s): min=25928, max=26304, per=100.00%, avg=26078.00, stdev=161.77, samples=4 00:38:29.757 iops : min= 6482, max= 6576, avg=6519.50, stdev=40.44, samples=4 00:38:29.757 lat (msec) : 4=0.10%, 10=56.95%, 20=42.46%, 50=0.08%, 100=0.40% 00:38:29.757 cpu : usr=74.71%, sys=23.93%, ctx=11, majf=0, minf=6 00:38:29.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:38:29.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:29.757 issued rwts: total=13078,13098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:29.757 00:38:29.757 Run status group 0 (all jobs): 00:38:29.757 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=51.1MiB (53.6MB), run=2049-2049msec 00:38:29.757 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=51.2MiB (53.6MB), run=2049-2049msec 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:29.757 23:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:38:30.017 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:38:30.017 fio-3.35 00:38:30.017 Starting 1 thread 00:38:30.017 EAL: No free 2048 kB hugepages reported on node 1 00:38:32.548 00:38:32.548 test: (groupid=0, jobs=1): err= 0: pid=1014476: Mon Jul 22 23:19:08 2024 00:38:32.548 read: IOPS=6477, BW=101MiB/s (106MB/s)(204MiB/2013msec) 00:38:32.548 slat (usec): min=2, max=124, avg= 4.39, stdev= 2.41 00:38:32.548 clat (usec): min=2475, max=30202, avg=11248.56, stdev=2633.12 00:38:32.548 lat (usec): min=2479, max=30215, avg=11252.95, stdev=2633.43 00:38:32.548 clat percentiles (usec): 00:38:32.548 | 1.00th=[ 5735], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9503], 00:38:32.548 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:38:32.548 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14353], 95.00th=[15401], 00:38:32.548 | 99.00th=[19792], 99.50th=[24511], 99.90th=[28967], 99.95th=[29492], 00:38:32.548 | 99.99th=[30016] 00:38:32.548 bw ( KiB/s): min=43584, max=59552, per=49.61%, avg=51424.00, 
stdev=7838.93, samples=4 00:38:32.548 iops : min= 2724, max= 3722, avg=3214.00, stdev=489.93, samples=4 00:38:32.548 write: IOPS=3724, BW=58.2MiB/s (61.0MB/s)(105MiB/1807msec); 0 zone resets 00:38:32.548 slat (usec): min=30, max=246, avg=39.64, stdev= 9.43 00:38:32.548 clat (usec): min=8738, max=33001, avg=15494.79, stdev=2726.50 00:38:32.548 lat (usec): min=8773, max=33093, avg=15534.43, stdev=2726.87 00:38:32.548 clat percentiles (usec): 00:38:32.548 | 1.00th=[10290], 5.00th=[11338], 10.00th=[12125], 20.00th=[13173], 00:38:32.548 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15401], 60.00th=[16057], 00:38:32.548 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18482], 95.00th=[19530], 00:38:32.548 | 99.00th=[24773], 99.50th=[26608], 99.90th=[28181], 99.95th=[28443], 00:38:32.548 | 99.99th=[32900] 00:38:32.548 bw ( KiB/s): min=45088, max=61088, per=89.88%, avg=53560.00, stdev=8074.97, samples=4 00:38:32.548 iops : min= 2818, max= 3818, avg=3347.50, stdev=504.69, samples=4 00:38:32.548 lat (msec) : 4=0.14%, 10=19.67%, 20=78.26%, 50=1.93% 00:38:32.548 cpu : usr=77.98%, sys=20.18%, ctx=104, majf=0, minf=25 00:38:32.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:38:32.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:32.548 issued rwts: total=13040,6730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:32.548 00:38:32.548 Run status group 0 (all jobs): 00:38:32.548 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=204MiB (214MB), run=2013-2013msec 00:38:32.548 WRITE: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=105MiB (110MB), run=1807-1807msec 00:38:32.548 23:19:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:32.808 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:33.068 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:33.068 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:38:33.068 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:38:33.068 23:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 -i 10.0.0.2 00:38:36.365 Nvme0n1 00:38:36.365 23:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=006fd478-02b0-4837-8a98-16ebed6da76d 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 006fd478-02b0-4837-8a98-16ebed6da76d 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=006fd478-02b0-4837-8a98-16ebed6da76d 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:38:39.665 23:19:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:40.235 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:38:40.235 { 00:38:40.235 "uuid": "006fd478-02b0-4837-8a98-16ebed6da76d", 00:38:40.235 "name": "lvs_0", 00:38:40.235 "base_bdev": "Nvme0n1", 00:38:40.235 "total_data_clusters": 930, 00:38:40.235 "free_clusters": 930, 00:38:40.235 "block_size": 512, 00:38:40.235 "cluster_size": 1073741824 00:38:40.235 } 00:38:40.235 ]' 00:38:40.235 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="006fd478-02b0-4837-8a98-16ebed6da76d") .free_clusters' 00:38:40.235 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:38:40.235 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="006fd478-02b0-4837-8a98-16ebed6da76d") .cluster_size' 00:38:40.495 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:38:40.495 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:38:40.495 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:38:40.495 952320 00:38:40.495 23:19:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:38:41.064 4daf15be-70aa-4346-9431-45dde2b53821 00:38:41.064 23:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:38:41.322 23:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:38:41.892 23:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:42.463 23:19:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:42.463 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:42.463 fio-3.35 00:38:42.463 Starting 1 thread 00:38:42.724 EAL: No free 2048 kB hugepages reported on node 1 00:38:45.267 00:38:45.267 test: (groupid=0, jobs=1): err= 0: pid=1015977: Mon Jul 22 23:19:21 2024 00:38:45.267 read: IOPS=4490, BW=17.5MiB/s (18.4MB/s)(35.3MiB/2010msec) 00:38:45.267 slat (usec): min=2, max=377, avg= 6.42, stdev= 5.24 00:38:45.267 clat (usec): min=1284, max=174283, avg=15406.40, stdev=13325.02 00:38:45.267 lat (usec): min=1291, max=174289, avg=15412.82, stdev=13325.40 00:38:45.267 clat percentiles (msec): 00:38:45.267 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 13], 
20.00th=[ 14], 00:38:45.267 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:38:45.267 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 16], 95.00th=[ 17], 00:38:45.267 | 99.00th=[ 20], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 176], 00:38:45.267 | 99.99th=[ 176] 00:38:45.267 bw ( KiB/s): min=12896, max=19760, per=99.80%, avg=17926.00, stdev=3358.69, samples=4 00:38:45.267 iops : min= 3224, max= 4940, avg=4481.50, stdev=839.67, samples=4 00:38:45.267 write: IOPS=4488, BW=17.5MiB/s (18.4MB/s)(35.2MiB/2010msec); 0 zone resets 00:38:45.267 slat (usec): min=2, max=160, avg= 6.81, stdev= 2.33 00:38:45.267 clat (usec): min=427, max=170382, avg=12875.83, stdev=12385.04 00:38:45.267 lat (usec): min=434, max=170437, avg=12882.64, stdev=12385.25 00:38:45.267 clat percentiles (msec): 00:38:45.267 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:38:45.267 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:38:45.267 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:38:45.267 | 99.00th=[ 16], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:38:45.267 | 99.99th=[ 171] 00:38:45.267 bw ( KiB/s): min=13416, max=19648, per=99.79%, avg=17914.00, stdev=3003.37, samples=4 00:38:45.267 iops : min= 3354, max= 4912, avg=4478.50, stdev=750.84, samples=4 00:38:45.267 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:38:45.267 lat (msec) : 2=0.03%, 4=0.08%, 10=2.07%, 20=96.99%, 50=0.10% 00:38:45.267 lat (msec) : 250=0.71% 00:38:45.267 cpu : usr=69.19%, sys=28.82%, ctx=42, majf=0, minf=6 00:38:45.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:38:45.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:45.267 issued rwts: total=9026,9021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:45.267 00:38:45.267 Run status group 0 (all jobs): 00:38:45.267 READ: bw=17.5MiB/s (18.4MB/s), 17.5MiB/s-17.5MiB/s (18.4MB/s-18.4MB/s), io=35.3MiB (37.0MB), run=2010-2010msec 00:38:45.267 WRITE: bw=17.5MiB/s (18.4MB/s), 17.5MiB/s-17.5MiB/s (18.4MB/s-18.4MB/s), io=35.2MiB (36.9MB), run=2010-2010msec 00:38:45.267 23:19:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:45.527 23:19:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=858bb253-3f05-424f-aed8-e29f09c30211 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 858bb253-3f05-424f-aed8-e29f09c30211 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=858bb253-3f05-424f-aed8-e29f09c30211 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:38:47.432 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:38:47.690 { 00:38:47.690 "uuid": "006fd478-02b0-4837-8a98-16ebed6da76d", 00:38:47.690 "name": "lvs_0", 00:38:47.690 "base_bdev": "Nvme0n1", 00:38:47.690 "total_data_clusters": 930, 00:38:47.690 "free_clusters": 0, 00:38:47.690 "block_size": 512, 00:38:47.690 "cluster_size": 1073741824 00:38:47.690 }, 00:38:47.690 { 00:38:47.690 "uuid": "858bb253-3f05-424f-aed8-e29f09c30211", 00:38:47.690 "name": "lvs_n_0", 00:38:47.690 "base_bdev": "4daf15be-70aa-4346-9431-45dde2b53821", 00:38:47.690 "total_data_clusters": 237847, 00:38:47.690 "free_clusters": 237847, 00:38:47.690 "block_size": 512, 00:38:47.690 "cluster_size": 4194304 00:38:47.690 } 00:38:47.690 ]' 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="858bb253-3f05-424f-aed8-e29f09c30211") .free_clusters' 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="858bb253-3f05-424f-aed8-e29f09c30211") .cluster_size' 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:38:47.690 951388 00:38:47.690 23:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:38:49.069 b386dd7d-fa54-4738-a603-e85357a05b06 00:38:49.070 23:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:38:49.637 23:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:38:50.205 23:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:38:50.774 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:38:51.032 23:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:38:51.032 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:51.032 fio-3.35 00:38:51.032 Starting 1 thread 00:38:51.292 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.831 00:38:53.831 test: (groupid=0, jobs=1): err= 0: pid=1016998: Mon Jul 22 23:19:29 2024 00:38:53.831 read: IOPS=4314, BW=16.9MiB/s (17.7MB/s)(33.9MiB/2011msec) 00:38:53.831 slat (usec): min=2, max=144, avg= 7.06, stdev= 2.84 00:38:53.831 clat (usec): min=5755, max=24731, avg=16099.05, stdev=1387.02 00:38:53.831 lat (usec): min=5799, max=24738, avg=16106.12, stdev=1386.87 00:38:53.831 clat percentiles (usec): 00:38:53.831 | 1.00th=[12911], 5.00th=[14091], 10.00th=[14484], 20.00th=[15008], 00:38:53.831 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:38:53.831 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:38:53.831 | 99.00th=[19268], 99.50th=[19530], 99.90th=[23725], 99.95th=[24249], 00:38:53.831 | 99.99th=[24773] 00:38:53.831 bw ( KiB/s): min=16216, max=17648, per=99.77%, avg=17220.00, stdev=674.87, samples=4 00:38:53.831 iops : min= 4054, max= 4412, avg=4305.00, stdev=168.72, samples=4 00:38:53.831 write: IOPS=4311, BW=16.8MiB/s (17.7MB/s)(33.9MiB/2011msec); 0 zone resets 00:38:53.831 slat (usec): min=3, max=186, avg= 7.40, stdev= 2.94 
00:38:53.831 clat (usec): min=2683, max=24832, avg=13403.43, stdev=1257.33 00:38:53.831 lat (usec): min=2690, max=24840, avg=13410.83, stdev=1257.31 00:38:53.831 clat percentiles (usec): 00:38:53.831 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:38:53.831 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:38:53.831 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15270], 00:38:53.831 | 99.00th=[16319], 99.50th=[16909], 99.90th=[22938], 99.95th=[24511], 00:38:53.831 | 99.99th=[24773] 00:38:53.831 bw ( KiB/s): min=17040, max=17328, per=99.85%, avg=17222.00, stdev=128.89, samples=4 00:38:53.831 iops : min= 4260, max= 4332, avg=4305.50, stdev=32.22, samples=4 00:38:53.831 lat (msec) : 4=0.01%, 10=0.32%, 20=99.33%, 50=0.34% 00:38:53.831 cpu : usr=71.54%, sys=26.32%, ctx=46, majf=0, minf=6 00:38:53.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:38:53.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:53.831 issued rwts: total=8677,8671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:53.831 00:38:53.831 Run status group 0 (all jobs): 00:38:53.831 READ: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.5MB), run=2011-2011msec 00:38:53.831 WRITE: bw=16.8MiB/s (17.7MB/s), 16.8MiB/s-16.8MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.5MB), run=2011-2011msec 00:38:53.831 23:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:38:54.091 23:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:38:54.091 23:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:38:59.366 23:19:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:38:59.366 23:19:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:39:02.664 23:19:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:39:02.664 23:19:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:39:04.573 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:04.573 23:19:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:04.573 rmmod nvme_tcp 00:39:04.833 rmmod nvme_fabrics 00:39:04.833 rmmod nvme_keyring 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1013533 ']' 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1013533 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1013533 ']' 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1013533 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1013533 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1013533' 00:39:04.833 killing process with pid 1013533 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1013533 00:39:04.833 23:19:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1013533 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.094 23:19:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.634 23:19:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:07.634 00:39:07.634 real 0m47.449s 00:39:07.634 user 3m6.139s 00:39:07.634 sys 0m8.602s 00:39:07.634 23:19:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:07.634 23:19:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.634 ************************************ 00:39:07.634 END TEST nvmf_fio_host 00:39:07.634 ************************************ 00:39:07.634 23:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:07.634 23:19:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.635 ************************************ 00:39:07.635 START TEST nvmf_failover 00:39:07.635 ************************************ 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:07.635 * Looking for test storage... 00:39:07.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:07.635 23:19:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:39:07.635 23:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.927 
23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.927 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:10.928 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:10.928 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:10.928 Found net devices under 0000:84:00.0: cvl_0_0 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:10.928 Found net devices under 0000:84:00.1: cvl_0_1 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
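The nvmf_tcp_init trace that follows moves the first ice port (cvl_0_0) into a private network namespace for the SPDK target at 10.0.0.2 and leaves the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, so host and target exchange NVMe/TCP over a real back-to-back link. A condensed, root-only sketch of those steps, assuming the same cvl_0_0/cvl_0_1 names, namespace name, and 10.0.0.0/24 addressing this run reports:

  # target-side port goes into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator address in the root namespace, target address inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring both ports (and the namespaced loopback) up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic for port 4420 in on the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check connectivity in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1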
00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:10.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:39:10.928 00:39:10.928 --- 10.0.0.2 ping statistics --- 00:39:10.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.928 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:39:10.928 00:39:10.928 --- 10.0.0.1 ping statistics --- 00:39:10.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.928 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1020510 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1020510 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1020510 ']' 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:10.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:10.928 23:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:10.928 [2024-07-22 23:19:47.018821] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:39:10.928 [2024-07-22 23:19:47.018987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:10.928 EAL: No free 2048 kB hugepages reported on node 1 00:39:10.928 [2024-07-22 23:19:47.150477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:11.187 [2024-07-22 23:19:47.262768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.187 [2024-07-22 23:19:47.262835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.187 [2024-07-22 23:19:47.262855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.187 [2024-07-22 23:19:47.262871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.187 [2024-07-22 23:19:47.262886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.187 [2024-07-22 23:19:47.263301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.187 [2024-07-22 23:19:47.263382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:11.187 [2024-07-22 23:19:47.263386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:11.187 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:11.445 [2024-07-22 23:19:47.709162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.445 23:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:12.013 Malloc0 00:39:12.013 23:19:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:12.272 23:19:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:12.840 23:19:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:13.098 [2024-07-22 23:19:49.281753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.098 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:13.356 [2024-07-22 23:19:49.578655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:13.356 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:13.614 [2024-07-22 23:19:49.879726] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1020804 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1020804 /var/tmp/bdevperf.sock 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1020804 ']' 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:13.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
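Before the failover run starts, the target is provisioned over the RPC socket: a TCP transport, one malloc bdev, one subsystem (nqn.2016-06.io.spdk:cnode1) with that bdev as a namespace, and three listeners on ports 4420/4421/4422 that the test later removes and re-adds to force path failover; bdevperf is started idle (-z) against /var/tmp/bdevperf.sock and driven afterwards via bdevperf.py perform_tests. A condensed sketch of that RPC sequence, with all arguments taken from the trace above and paths shortened to a local SPDK checkout (an assumption):

    #!/usr/bin/env bash
    # Condensed provisioning sequence from the trace above; assumes rpc.py from a local SPDK
    # checkout and nvmf_tgt already running inside the cvl_0_0_ns_spdk namespace.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the options used by the test
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM disk, 512-byte blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    for port in 4420 4421 4422; do                       # three paths for the failover test
        $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator side: bdevperf waits for RPC (-z), then the first path is attached and I/O is run,
    # as in the trace:
    # ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    # ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    #     -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
    # ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests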
00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:13.614 23:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:14.184 23:19:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:14.184 23:19:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:39:14.184 23:19:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:15.119 NVMe0n1 00:39:15.119 23:19:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:15.377 00:39:15.377 23:19:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1021062 00:39:15.377 23:19:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:15.377 23:19:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:39:16.314 23:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:16.881 [2024-07-22 23:19:52.963557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.881 [2024-07-22 23:19:52.963819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.882 [2024-07-22 23:19:52.963835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1473790 is same with the state(5) to be set 00:39:16.882 [2024-07-22 23:19:52.963852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.882 [2024-07-22 23:19:52.963868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.882 [2024-07-22 23:19:52.963884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.882 [2024-07-22 23:19:52.963901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473790 is same with the state(5) to be set 00:39:16.882 23:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:39:20.173 23:19:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:20.432 00:39:20.432 23:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:20.692 [2024-07-22 23:19:56.812544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.812986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.813003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.813019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.813035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 [2024-07-22 23:19:56.813052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14745d0 is same with the state(5) to be set 00:39:20.692 23:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:39:24.007 23:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.007 [2024-07-22 23:20:00.093156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.007 23:20:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:39:24.947 23:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:25.515 [2024-07-22 23:20:01.667536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.515 [2024-07-22 23:20:01.667623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.515 [2024-07-22 23:20:01.667645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.515 [2024-07-22 23:20:01.667662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.515 [2024-07-22 23:20:01.667680] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.515 [2024-07-22 23:20:01.667708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 [2024-07-22 23:20:01.667949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475960 is same with the state(5) to be set 00:39:25.516 23:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1021062 00:39:30.792 0 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1020804 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1020804 ']' 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1020804 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1020804 00:39:30.792 23:20:06 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1020804' 00:39:30.792 killing process with pid 1020804 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1020804 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1020804 00:39:30.792 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:30.792 [2024-07-22 23:19:49.944181] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:39:30.792 [2024-07-22 23:19:49.944271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020804 ] 00:39:30.792 EAL: No free 2048 kB hugepages reported on node 1 00:39:30.792 [2024-07-22 23:19:50.014864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.792 [2024-07-22 23:19:50.123541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.792 Running I/O for 15 seconds... 00:39:30.792 [2024-07-22 23:19:52.964792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.792 [2024-07-22 23:19:52.964849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.792 [2024-07-22 23:19:52.964887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.792 [2024-07-22 23:19:52.964910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.792 [2024-07-22 23:19:52.964934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.792 [2024-07-22 23:19:52.964954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.792 [2024-07-22 23:19:52.964975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.792 [2024-07-22 23:19:52.964995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:30.793 [2024-07-22 23:19:52.965525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.965967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.965986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.793 [2024-07-22 23:19:52.966615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.793 [2024-07-22 23:19:52.966636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75304 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.966961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.966979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 
23:19:52.967180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.794 [2024-07-22 23:19:52.967830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.967871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.967916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.967957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.967978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.967997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.968018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.968037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.968058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.968076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.968097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.968116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.968136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.968155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.968175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.794 [2024-07-22 23:19:52.968193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.794 [2024-07-22 23:19:52.968214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 
[2024-07-22 23:19:52.968824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.968965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.968985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.795 [2024-07-22 23:19:52.969004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.795 [2024-07-22 23:19:52.969044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.795 [2024-07-22 23:19:52.969083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.795 [2024-07-22 23:19:52.969737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.795 [2024-07-22 23:19:52.969775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.795 [2024-07-22 23:19:52.969797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:52.969815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:52.969837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:52.969855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:52.969876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:52.969894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:52.969915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:52.969939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:52.969961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:52.969980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:52.970001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:52.970019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:52.970059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.796 
[2024-07-22 23:19:52.970080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:39:30.796 [2024-07-22 23:19:52.970097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74928 len:8 PRP1 0x0 PRP2 0x0
00:39:30.796 [2024-07-22 23:19:52.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.796 [2024-07-22 23:19:52.970193] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7feb60 was disconnected and freed. reset controller.
00:39:30.796 [2024-07-22 23:19:52.970217] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:39:30.796 [2024-07-22 23:19:52.970263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.796 [2024-07-22 23:19:52.970287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.796 [2024-07-22 23:19:52.970317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.796 [2024-07-22 23:19:52.970339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.796 [2024-07-22 23:19:52.970358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.796 [2024-07-22 23:19:52.970376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.796 [2024-07-22 23:19:52.970394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.796 [2024-07-22 23:19:52.970412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.796 [2024-07-22 23:19:52.970429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:30.796 [2024-07-22 23:19:52.970506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7cabb0 (9): Bad file descriptor
00:39:30.796 [2024-07-22 23:19:52.974917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:30.796 [2024-07-22 23:19:53.025934] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:39:30.796 [2024-07-22 23:19:56.813614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.813963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.813982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814083] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.796 [2024-07-22 23:19:56.814677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.796 [2024-07-22 23:19:56.814698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.814959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.814980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26696 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.797 [2024-07-22 23:19:56.815940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.797 [2024-07-22 23:19:56.815962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.815980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 
23:19:56.816138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.816973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:30.798 [2024-07-22 23:19:56.817386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.798 [2024-07-22 23:19:56.817465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.798 [2024-07-22 23:19:56.817484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817782] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.817964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.817984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.799 [2024-07-22 23:19:56.818570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:39:30.799 [2024-07-22 23:19:56.818651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27352 len:8 PRP1 0x0 PRP2 0x0 00:39:30.799 [2024-07-22 23:19:56.818670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.799 [2024-07-22 23:19:56.818710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.799 [2024-07-22 23:19:56.818726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27360 len:8 PRP1 0x0 PRP2 0x0 00:39:30.799 [2024-07-22 23:19:56.818744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.799 [2024-07-22 23:19:56.818778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.799 [2024-07-22 23:19:56.818793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27368 len:8 PRP1 0x0 PRP2 0x0 00:39:30.799 [2024-07-22 23:19:56.818811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.799 [2024-07-22 23:19:56.818844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.799 [2024-07-22 23:19:56.818860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27376 len:8 PRP1 0x0 PRP2 0x0 00:39:30.799 [2024-07-22 23:19:56.818878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.799 [2024-07-22 23:19:56.818911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.799 [2024-07-22 23:19:56.818926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27384 len:8 PRP1 0x0 PRP2 0x0 00:39:30.799 [2024-07-22 23:19:56.818949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.818968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.799 [2024-07-22 23:19:56.818984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.799 [2024-07-22 23:19:56.819000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27392 len:8 PRP1 0x0 PRP2 0x0 00:39:30.799 [2024-07-22 23:19:56.819018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.799 [2024-07-22 23:19:56.819036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.799 [2024-07-22 23:19:56.819051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.799 [2024-07-22 23:19:56.819067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26824 len:8 PRP1 0x0 PRP2 0x0
00:39:30.799 [2024-07-22 23:19:56.819085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.799 [2024-07-22 23:19:56.819160] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7f7810 was disconnected and freed. reset controller.
00:39:30.799 [2024-07-22 23:19:56.819184] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:39:30.800 [2024-07-22 23:19:56.819230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.800 [2024-07-22 23:19:56.819254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.800 [2024-07-22 23:19:56.819275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.800 [2024-07-22 23:19:56.819293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.800 [2024-07-22 23:19:56.819322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.800 [2024-07-22 23:19:56.819342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.800 [2024-07-22 23:19:56.819362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:39:30.800 [2024-07-22 23:19:56.819380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:30.800 [2024-07-22 23:19:56.819398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:30.800 [2024-07-22 23:19:56.823832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:30.800 [2024-07-22 23:19:56.823884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7cabb0 (9): Bad file descriptor
00:39:30.800 [2024-07-22 23:19:56.945981] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
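The run above repeats the same pattern twice: queued I/O completions are printed as ABORTED - SQ DELETION, the old qpair is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421 and then from 10.0.0.2:4421 to 10.0.0.2:4422, and each cycle ends with "Resetting controller successful." When reading long stretches of this output it can help to reduce it to just those transitions. The following is a minimal, hypothetical Python sketch (not part of the SPDK repository or of this autotest run) that condenses console text in this format into a short failover summary.

#!/usr/bin/env python3
# Hypothetical helper, not part of this autotest: condense SPDK console output
# like the lines above into a short failover/reset summary.
import re
import sys

ABORTED = re.compile(r"ABORTED - SQ DELETION")
FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK = re.compile(r"Resetting controller successful")

def summarize(lines):
    aborted = 0
    events = []
    for line in lines:
        # Count every aborted completion printed on this line (lines may hold several entries).
        aborted += len(ABORTED.findall(line))
        # Record trid changes, e.g. 10.0.0.2:4420 -> 10.0.0.2:4421.
        for src, dst in FAILOVER.findall(line):
            events.append(f"failover {src} -> {dst}")
        # Record the end of each reset cycle.
        for _ in RESET_OK.findall(line):
            events.append("controller reset successful")
    return aborted, events

if __name__ == "__main__":
    count, events = summarize(sys.stdin)
    print(f"ABORTED - SQ DELETION completions: {count}")
    for event in events:
        print(event)

For the portion of the log shown in this section, such a summary would list the 4420 -> 4421 and 4421 -> 4422 transitions, each followed by a successful controller reset.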
00:39:30.800 [2024-07-22 23:20:01.669070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.669970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.669991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41744 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.800 [2024-07-22 23:20:01.670462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.800 [2024-07-22 23:20:01.670482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 
[2024-07-22 23:20:01.670780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.801 [2024-07-22 23:20:01.670940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.670961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.670979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.801 [2024-07-22 23:20:01.671740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.801 [2024-07-22 23:20:01.671760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.671783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.671805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.671824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.671844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.671862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.671883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.671901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.671922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.671941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.671962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.671980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:30.802 [2024-07-22 23:20:01.672411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672824] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.672963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.672984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.802 [2024-07-22 23:20:01.673258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.802 [2024-07-22 23:20:01.673277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.803 [2024-07-22 23:20:01.673344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.803 [2024-07-22 23:20:01.673385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:30.803 [2024-07-22 23:20:01.673425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.673962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.673980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41984 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:30.803 [2024-07-22 23:20:01.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:30.803 [2024-07-22 23:20:01.674324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:30.803 [2024-07-22 23:20:01.674343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42032 len:8 PRP1 0x0 PRP2 0x0 00:39:30.803 [2024-07-22 23:20:01.674361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674440] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7ff950 was disconnected and freed. reset controller. 
00:39:30.803 [2024-07-22 23:20:01.674466] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:39:30.803 [2024-07-22 23:20:01.674511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:30.803 [2024-07-22 23:20:01.674535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:30.803 [2024-07-22 23:20:01.674574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:30.803 [2024-07-22 23:20:01.674611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:30.803 [2024-07-22 23:20:01.674647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:30.803 [2024-07-22 23:20:01.674665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:30.803 [2024-07-22 23:20:01.674713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7cabb0 (9): Bad file descriptor 00:39:30.803 [2024-07-22 23:20:01.679130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:30.803 [2024-07-22 23:20:01.768681] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
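The failover chain visible above (10.0.0.2:4421 to 4422, then 4422 back to 4420) is possible because the same controller is registered with several TCP trids; the bdev_nvme_attach_controller calls traced further down for the second bdevperf instance show the pattern. A minimal sketch of that setup, assuming the target side already listens on all three ports (nvmf_subsystem_add_listener, also traced below); the test script issues the three calls individually, the loop here is only a compaction:

# Register 10.0.0.2:4420 as the primary path and 4421/4422 as alternates for
# the same controller name, so bdev_nvme can fail over between them when a
# path drops. Socket, address and NQN mirror the ones used in this run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for port in 4420 4421 4422; do
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done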
00:39:30.803 
00:39:30.803 Latency(us)
00:39:30.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:30.803 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:30.803 Verification LBA range: start 0x0 length 0x4000
00:39:30.803 NVMe0n1 : 15.01 6530.33 25.51 492.37 0.00 18186.95 734.25 23301.69
00:39:30.803 ===================================================================================================================
00:39:30.803 Total : 6530.33 25.51 492.37 0.00 18186.95 734.25 23301.69
00:39:30.803 Received shutdown signal, test time was about 15.000000 seconds
00:39:30.803 
00:39:30.803 Latency(us)
00:39:30.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:30.803 ===================================================================================================================
00:39:30.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1022780
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1022780 /var/tmp/bdevperf.sock
00:39:30.803 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1022780 ']'
00:39:30.804 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:39:30.804 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:39:30.804 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:39:30.804 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:30.804 23:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:31.062 23:20:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:31.062 23:20:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:39:31.062 23:20:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:31.320 [2024-07-22 23:20:07.582408] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:31.320 23:20:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:31.579 [2024-07-22 23:20:07.887342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:31.837 23:20:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:32.095 NVMe0n1 00:39:32.353 23:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:32.610 00:39:32.610 23:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:33.178 00:39:33.178 23:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:33.178 23:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:39:33.436 23:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:33.694 23:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:39:36.990 23:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:36.990 23:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:39:36.990 23:20:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1023448 00:39:36.990 23:20:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:36.990 23:20:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1023448 00:39:38.366 0 00:39:38.366 23:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:38.366 [2024-07-22 23:20:06.955291] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:39:38.366 [2024-07-22 23:20:06.955421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022780 ] 00:39:38.366 EAL: No free 2048 kB hugepages reported on node 1 00:39:38.366 [2024-07-22 23:20:07.060340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.366 [2024-07-22 23:20:07.171523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.366 [2024-07-22 23:20:09.911539] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:39:38.366 [2024-07-22 23:20:09.911634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.366 [2024-07-22 23:20:09.911665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.366 [2024-07-22 23:20:09.911687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.366 [2024-07-22 23:20:09.911706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.367 [2024-07-22 23:20:09.911725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.367 [2024-07-22 23:20:09.911743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.367 [2024-07-22 23:20:09.911762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:38.367 [2024-07-22 23:20:09.911780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:38.367 [2024-07-22 23:20:09.911808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.367 [2024-07-22 23:20:09.911866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.367 [2024-07-22 23:20:09.911908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2fbb0 (9): Bad file descriptor 00:39:38.367 [2024-07-22 23:20:10.015491] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:38.367 Running I/O for 1 seconds... 
00:39:38.367 
00:39:38.367 Latency(us)
00:39:38.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:38.367 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:38.367 Verification LBA range: start 0x0 length 0x4000
00:39:38.367 NVMe0n1 : 1.01 6630.95 25.90 0.00 0.00 19205.57 4223.43 16602.45
00:39:38.367 ===================================================================================================================
00:39:38.367 Total : 6630.95 25.90 0.00 0.00 19205.57 4223.43 16602.45
00:39:38.367 23:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:39:38.367 23:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:39:38.935 23:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:39:39.504 23:20:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:39:39.504 23:20:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:39:40.070 23:20:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:39:40.329 23:20:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:39:43.622 23:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:39:43.622 23:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
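The teardown traced just above removes the remaining TCP paths one at a time and re-runs bdev_nvme_get_controllers in between to see whether the NVMe0 controller is still registered. A minimal sketch of that pattern, reusing the socket, address and NQN from this run; the test performs the checks and detaches as separate traced steps, the loop here is only a compaction:

# Drop the remaining paths one by one, checking controller registration first.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
for port in 4422 4421; do
    # confirm NVMe0 is still registered before dropping the next path
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# final check once the last path has been detached
"$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0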
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:44.707 rmmod nvme_tcp 00:39:44.707 rmmod nvme_fabrics 00:39:44.707 rmmod nvme_keyring 00:39:44.707 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1020510 ']' 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1020510 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1020510 ']' 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1020510 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1020510 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1020510' 00:39:44.708 killing process with pid 1020510 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1020510 00:39:44.708 23:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1020510 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.966 23:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:47.551 00:39:47.551 real 0m39.830s 00:39:47.551 user 2m19.600s 00:39:47.551 sys 0m7.665s 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:47.551 ************************************ 00:39:47.551 END TEST nvmf_failover 00:39:47.551 ************************************ 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.551 ************************************ 00:39:47.551 START TEST nvmf_host_discovery 00:39:47.551 ************************************ 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:39:47.551 * Looking for test storage... 00:39:47.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:39:47.551 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:47.552 23:20:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:39:47.552 23:20:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:50.091 23:20:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:50.091 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:50.092 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:50.092 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:50.092 Found net devices under 0000:84:00.0: cvl_0_0 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:50.092 Found net devices under 0000:84:00.1: cvl_0_1 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:50.092 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:50.352 23:20:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:50.352 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:50.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:50.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:39:50.353 00:39:50.353 --- 10.0.0.2 ping statistics --- 00:39:50.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.353 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:50.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:50.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:39:50.353 00:39:50.353 --- 10.0.0.1 ping statistics --- 00:39:50.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.353 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1026317 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1026317 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1026317 ']' 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
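For reference, the nvmf_tcp_init trace above boils down to the following interface setup (a condensed sketch, not a verbatim excerpt of nvmf/common.sh; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones detected and assigned in the log):

  ip netns add cvl_0_0_ns_spdk                                   # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                             # initiator -> target reachability check

The target-side nvmf_tgt that follows is launched through "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), which is why its listeners sit on 10.0.0.2 while the host-side application talks to them from the default namespace via cvl_0_1.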
00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:50.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:50.353 23:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.613 [2024-07-22 23:20:26.688903] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:39:50.613 [2024-07-22 23:20:26.689042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:50.613 EAL: No free 2048 kB hugepages reported on node 1 00:39:50.613 [2024-07-22 23:20:26.800237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:50.613 [2024-07-22 23:20:26.910382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:50.613 [2024-07-22 23:20:26.910453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:50.613 [2024-07-22 23:20:26.910473] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:50.613 [2024-07-22 23:20:26.910490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:50.613 [2024-07-22 23:20:26.910504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:50.613 [2024-07-22 23:20:26.910541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.873 [2024-07-22 23:20:27.081248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:39:50.873 [2024-07-22 23:20:27.089462] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.873 null0 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.873 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.874 null1 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1026372 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1026372 /tmp/host.sock 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1026372 ']' 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:39:50.874 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:50.874 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.874 [2024-07-22 23:20:27.176324] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:39:50.874 [2024-07-22 23:20:27.176415] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026372 ] 00:39:51.134 EAL: No free 2048 kB hugepages reported on node 1 00:39:51.134 [2024-07-22 23:20:27.258411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.134 [2024-07-22 23:20:27.369660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 
23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:51.704 23:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.964 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:39:51.964 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:39:51.964 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:39:51.965 23:20:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 [2024-07-22 23:20:28.156382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.965 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:39:52.225 23:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:39:52.795 [2024-07-22 23:20:28.800480] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:39:52.795 [2024-07-22 23:20:28.800515] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:39:52.795 [2024-07-22 23:20:28.800545] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:52.795 [2024-07-22 23:20:28.888856] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:39:53.055 [2024-07-22 23:20:29.114794] 
bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:39:53.055 [2024-07-22 23:20:29.114827] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.315 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:53.316 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.576 [2024-07-22 23:20:29.849520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:53.576 [2024-07-22 23:20:29.850034] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:39:53.576 [2024-07-22 23:20:29.850080] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:53.576 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:53.836 [2024-07-22 23:20:29.936834] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:53.836 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:53.837 23:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.837 23:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:39:53.837 23:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:39:54.097 [2024-07-22 23:20:30.196201] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:39:54.097 [2024-07-22 23:20:30.196237] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:39:54.097 [2024-07-22 23:20:30.196251] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:55.038 [2024-07-22 23:20:31.146356] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:39:55.038 [2024-07-22 23:20:31.146401] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:55.038 [2024-07-22 23:20:31.146624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:55.038 [2024-07-22 23:20:31.146663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:55.038 [2024-07-22 23:20:31.146687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:55.038 [2024-07-22 23:20:31.146706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:55.038 [2024-07-22 23:20:31.146725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:55.038 [2024-07-22 23:20:31.146753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:55.038 [2024-07-22 23:20:31.146773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:39:55.038 [2024-07-22 23:20:31.146792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:55.038 [2024-07-22 23:20:31.146809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:55.038 [2024-07-22 23:20:31.156619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.038 [2024-07-22 23:20:31.166672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.038 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.038 [2024-07-22 23:20:31.167001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.038 [2024-07-22 23:20:31.167042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.038 [2024-07-22 23:20:31.167064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.038 [2024-07-22 23:20:31.167095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.038 [2024-07-22 23:20:31.167123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.038 [2024-07-22 23:20:31.167142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.038 [2024-07-22 23:20:31.167162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.038 [2024-07-22 23:20:31.167188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
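[editor's note] Every waitforcondition call traced above is the same polling primitive from autotest_common.sh: evaluate a shell condition up to ten times, one second apart, and fail the test if it never becomes true. A minimal sketch reconstructed from the xtrace (the in-tree helper may differ in details such as the failure message):

    waitforcondition() {
        local cond=$1
        local max=${2:-10}    # the traces above use the default of 10 attempts
        while ((max--)); do
            # cond is an arbitrary shell expression, e.g.
            # '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        echo "condition \"$cond\" never became true" >&2
        return 1
    }
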
00:39:55.038 [2024-07-22 23:20:31.176765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.038 [2024-07-22 23:20:31.177035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.038 [2024-07-22 23:20:31.177073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.038 [2024-07-22 23:20:31.177094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.038 [2024-07-22 23:20:31.177122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.038 [2024-07-22 23:20:31.177158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.038 [2024-07-22 23:20:31.177177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.177194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.177218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:55.039 [2024-07-22 23:20:31.186851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.187120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.187157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.187178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.187206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.187233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.187251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.187270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.187294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
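[editor's note] The conditions themselves are built from three small query helpers in host/discovery.sh whose pipelines are visible in the trace (@55, @59, @63). A sketch of what they evaluate to, assuming rpc_cmd is the test framework's wrapper around scripts/rpc.py and /tmp/host.sock is the host application's RPC socket:

    get_subsystem_names() {
        # Names of NVMe controllers attached on the host app, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdevs created for discovered namespaces, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # Service IDs (ports) of every path to controller $1, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
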
00:39:55.039 [2024-07-22 23:20:31.196938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.197208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.197245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.197266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.197295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.197334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.197354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.197372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.197397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:55.039 [2024-07-22 23:20:31.207027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:55.039 [2024-07-22 23:20:31.207247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.207286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:55.039 [2024-07-22 23:20:31.207318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.207360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.207387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.207412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.207430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.207456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
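[editor's note] The notification checks (@74/@75 and @79/@80) keep a running notify_id: each notify_get_notifications -i $notify_id call returns only events newer than that id, and the id is advanced past the events just consumed, which is why the trace shows notify_id going 1 -> 2 and later 2 -> 4. A sketch of that bookkeeping using the helpers above; the in-tree helper may take the id from the last returned event rather than adding the count, but the effect in this run is the same:

    notify_id=0

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        # Poll until exactly the expected number of new bdev notifications has arrived.
        # bash's dynamic scoping makes expected_count visible inside the eval'ed condition.
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
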
00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:55.039 [2024-07-22 23:20:31.217115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.217318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.217366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.217387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.217416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.217444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.217462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.217479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.217504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:55.039 [2024-07-22 23:20:31.227205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.227390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.227428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.227449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.227478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.227505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.227523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.227540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.227565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:55.039 [2024-07-22 23:20:31.237291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.237479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.237517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.237538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.237568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.237595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.237613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.237631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.237655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
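[editor's note] The burst of reconnect errors through this stretch is expected: host/discovery.sh @127 removed the 4420 listener, so the controller's reconnect attempts to that port fail with ECONNREFUSED (errno = 111) until the next discovery log page drops the stale path. Condensed, the step being verified is the following sketch, reusing the helpers above (NVMF_PORT=4420 and NVMF_SECOND_PORT=4421 in this run):

    # Drop the first listener on the target side...
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # ...and wait for the host to converge on the remaining path only.
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
    is_notification_count_eq 0   # a path change adds or removes no bdevs
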
00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.039 [2024-07-22 23:20:31.247386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.247553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.247589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.247610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.247639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.247665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.039 [2024-07-22 23:20:31.247683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.039 [2024-07-22 23:20:31.247701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.039 [2024-07-22 23:20:31.247725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:55.039 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:39:55.039 [2024-07-22 23:20:31.257470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.039 [2024-07-22 23:20:31.257674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.039 [2024-07-22 23:20:31.257710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.039 [2024-07-22 23:20:31.257731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.039 [2024-07-22 23:20:31.257760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.039 [2024-07-22 23:20:31.257793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.040 [2024-07-22 23:20:31.257812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.040 [2024-07-22 
23:20:31.257829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.040 [2024-07-22 23:20:31.257854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:55.040 [2024-07-22 23:20:31.267556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:55.040 [2024-07-22 23:20:31.267801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:55.040 [2024-07-22 23:20:31.267838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bc8f0 with addr=10.0.0.2, port=4420 00:39:55.040 [2024-07-22 23:20:31.267860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bc8f0 is same with the state(5) to be set 00:39:55.040 [2024-07-22 23:20:31.267889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bc8f0 (9): Bad file descriptor 00:39:55.040 [2024-07-22 23:20:31.267916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:55.040 [2024-07-22 23:20:31.267935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:39:55.040 [2024-07-22 23:20:31.267952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:55.040 [2024-07-22 23:20:31.267977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
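[editor's note] Once the path checks settle, the remainder of the run below turns to the negative cases: discovery is stopped, restarted, and then deliberately started again with conflicting or unreachable parameters, each call wrapped in the NOT helper so that a failing RPC passes the test. A simplified sketch of that wrapper and of the two failures exercised below, reconstructed from the es bookkeeping in the trace (the in-tree helper is more thorough, e.g. it treats exit codes above 128 as signals):

    NOT() {
        # Run the command and succeed only if it fails.
        local es=0
        "$@" || es=$?
        ((es != 0))
    }

    # Starting discovery again with the same bdev name prefix is rejected
    # with JSON-RPC error -17 "File exists" (see the request/response below).
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Nothing listens on port 8010, so connect() keeps failing with errno 111
    # until the 3000 ms attach timeout expires and the RPC returns -110
    # "Connection timed out" (also shown below).
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
        -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
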
00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.040 [2024-07-22 23:20:31.273033] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:39:55.040 [2024-07-22 23:20:31.273070] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:39:55.040 23:20:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:56.437 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 
-- # (( max-- )) 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.438 23:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:57.818 [2024-07-22 23:20:33.694253] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:39:57.818 [2024-07-22 23:20:33.694295] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:39:57.818 [2024-07-22 23:20:33.694336] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:39:57.818 [2024-07-22 23:20:33.781591] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:39:57.818 [2024-07-22 23:20:34.051207] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:39:57.818 [2024-07-22 23:20:34.051268] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:57.818 request: 00:39:57.818 { 00:39:57.818 "name": "nvme", 00:39:57.818 "trtype": "tcp", 00:39:57.818 "traddr": "10.0.0.2", 00:39:57.818 "adrfam": "ipv4", 00:39:57.818 "trsvcid": "8009", 00:39:57.818 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:57.818 "wait_for_attach": true, 00:39:57.818 "method": "bdev_nvme_start_discovery", 00:39:57.818 "req_id": 1 00:39:57.818 } 00:39:57.818 Got JSON-RPC error response 00:39:57.818 response: 00:39:57.818 { 00:39:57.818 "code": -17, 00:39:57.818 "message": "File exists" 00:39:57.818 } 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:57.818 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:58.083 request: 00:39:58.083 { 00:39:58.083 "name": "nvme_second", 00:39:58.083 "trtype": "tcp", 00:39:58.083 "traddr": "10.0.0.2", 00:39:58.083 "adrfam": "ipv4", 00:39:58.083 "trsvcid": "8009", 00:39:58.083 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:58.083 "wait_for_attach": true, 00:39:58.083 "method": "bdev_nvme_start_discovery", 00:39:58.083 "req_id": 1 00:39:58.083 } 00:39:58.083 Got JSON-RPC error response 00:39:58.083 response: 00:39:58.083 { 00:39:58.083 "code": -17, 00:39:58.083 "message": "File exists" 00:39:58.083 } 00:39:58.083 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:39:58.084 23:20:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:58.084 23:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.466 [2024-07-22 23:20:35.355103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:59.466 [2024-07-22 23:20:35.355160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf760 with addr=10.0.0.2, port=8010 00:39:59.466 [2024-07-22 23:20:35.355194] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:59.466 [2024-07-22 23:20:35.355212] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:59.466 [2024-07-22 23:20:35.355228] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:40:00.405 [2024-07-22 23:20:36.357539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:00.405 [2024-07-22 23:20:36.357586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf760 with addr=10.0.0.2, port=8010 00:40:00.405 [2024-07-22 23:20:36.357612] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:00.405 [2024-07-22 23:20:36.357629] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:40:00.405 [2024-07-22 23:20:36.357645] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:40:01.346 [2024-07-22 23:20:37.359695] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:40:01.346 request: 00:40:01.346 { 00:40:01.346 "name": "nvme_second", 00:40:01.346 "trtype": "tcp", 00:40:01.346 "traddr": "10.0.0.2", 00:40:01.346 "adrfam": "ipv4", 00:40:01.346 "trsvcid": "8010", 00:40:01.346 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:01.346 "wait_for_attach": false, 00:40:01.346 "attach_timeout_ms": 3000, 00:40:01.346 "method": "bdev_nvme_start_discovery", 00:40:01.346 "req_id": 1 00:40:01.346 } 00:40:01.346 Got JSON-RPC error response 00:40:01.346 response: 00:40:01.346 { 00:40:01.346 "code": -110, 00:40:01.346 "message": "Connection timed out" 00:40:01.346 } 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1026372 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:01.346 rmmod nvme_tcp 00:40:01.346 rmmod nvme_fabrics 00:40:01.346 rmmod nvme_keyring 00:40:01.346 23:20:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1026317 ']' 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1026317 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1026317 ']' 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1026317 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1026317 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1026317' 00:40:01.346 killing process with pid 1026317 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1026317 00:40:01.346 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1026317 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.606 23:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:04.149 00:40:04.149 real 0m16.479s 00:40:04.149 user 0m24.479s 00:40:04.149 sys 0m4.150s 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:04.149 ************************************ 00:40:04.149 END TEST nvmf_host_discovery 00:40:04.149 ************************************ 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
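The nvmf_host_discovery run above exercises two failure paths of the bdev_nvme_start_discovery RPC: asking the host to start a second discovery service whose parameters collide with the one already attached returns -17 ("File exists"), and pointing discovery at a port nothing listens on with a 3000 ms attach timeout returns -110 ("Connection timed out"). A minimal sketch of that sequence follows; it is reconstructed from the trace rather than taken from it, and assumes a host application already listening on /tmp/host.sock and a target reachable at 10.0.0.2.

    # Illustrative reconstruction; not part of the captured output.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # First discovery attaches and creates the nvme0n1/nvme0n2 bdevs seen above.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # A colliding second start fails immediately with -17 "File exists".
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w || echo "expected failure: File exists"

    # Discovery against an unused port gives up after 3000 ms with -110 "Connection timed out".
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected failure: Connection timed out"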
00:40:04.149 23:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:04.149 ************************************ 00:40:04.149 START TEST nvmf_host_multipath_status 00:40:04.149 ************************************ 00:40:04.149 23:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:04.149 * Looking for test storage... 00:40:04.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.149 23:20:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.149 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:40:04.150 23:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:40:07.445 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:07.446 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:07.446 
23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:07.446 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:07.446 Found net devices under 0000:84:00.0: cvl_0_0 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.446 23:20:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:07.446 Found net devices under 0000:84:00.1: cvl_0_1 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:07.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:40:07.446 00:40:07.446 --- 10.0.0.2 ping statistics --- 00:40:07.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.446 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:07.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:40:07.446 00:40:07.446 --- 10.0.0.1 ping statistics --- 00:40:07.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.446 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1029762 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1029762 00:40:07.446 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1029762 ']' 00:40:07.447 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.447 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:07.447 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.447 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:07.447 23:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:07.447 [2024-07-22 23:20:43.373914] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:40:07.447 [2024-07-22 23:20:43.374015] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:07.447 EAL: No free 2048 kB hugepages reported on node 1 00:40:07.447 [2024-07-22 23:20:43.479646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:07.447 [2024-07-22 23:20:43.627999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:07.447 [2024-07-22 23:20:43.628105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:07.447 [2024-07-22 23:20:43.628140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:07.447 [2024-07-22 23:20:43.628169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:07.447 [2024-07-22 23:20:43.628195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:07.447 [2024-07-22 23:20:43.628386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.447 [2024-07-22 23:20:43.628394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1029762 00:40:08.390 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:08.653 [2024-07-22 23:20:44.787052] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:08.653 23:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:09.236 Malloc0 00:40:09.236 23:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:40:10.177 23:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:10.746 23:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:11.006 [2024-07-22 23:20:47.311558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.265 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:11.833 [2024-07-22 23:20:47.941470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1030207 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1030207 /var/tmp/bdevperf.sock 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1030207 ']' 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:11.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
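Before the path checks start, the target side is brought up piecewise in the trace above: a TCP transport, a 64 MB / 512-byte-block Malloc bdev, a subsystem with ANA reporting enabled (-r) and a two-namespace cap (-m 2), the Malloc namespace, and listeners on both port 4420 and 4421 of 10.0.0.2, after which bdevperf is launched on /var/tmp/bdevperf.sock. Collected into one place, the bring-up amounts to the following illustrative consolidation of commands already visible above, with $rpc standing in for the full .../spdk/scripts/rpc.py path:

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421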
00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:11.833 23:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:12.090 23:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:12.090 23:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:40:12.090 23:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:40:12.349 23:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:40:13.284 Nvme0n1 00:40:13.284 23:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:13.852 Nvme0n1 00:40:13.852 23:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:40:13.852 23:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:40:15.758 23:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:40:15.758 23:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:16.328 23:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:16.899 23:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:40:18.284 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:40:18.284 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:18.284 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:18.284 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:18.544 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:18.544 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:18.544 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:18.544 23:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:19.115 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:19.115 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:19.115 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:19.115 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:19.684 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:19.684 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:19.684 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:19.684 23:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:20.255 23:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.255 23:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:20.255 23:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.255 23:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:20.825 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.825 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:20.825 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.825 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:21.395 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:21.395 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:40:21.395 23:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:21.965 23:20:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:22.536 23:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:40:23.918 23:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:40:23.918 23:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:23.918 23:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:23.918 23:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:24.179 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:24.179 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:24.179 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:24.179 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:24.747 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:24.747 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:24.747 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:24.747 23:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:25.005 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:25.005 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:25.005 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:25.005 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:25.263 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:25.263 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:25.263 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:25.263 23:21:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:25.521 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:25.521 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:25.521 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:25.521 23:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:26.091 23:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:26.091 23:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:40:26.091 23:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:26.661 23:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:27.232 23:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:40:28.195 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:40:28.195 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:28.195 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:28.195 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:28.775 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:28.775 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:28.775 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:28.775 23:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:29.343 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:29.343 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:29.343 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:40:29.343 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:29.603 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:29.603 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:29.603 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:29.603 23:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:30.170 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:30.170 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:30.170 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:30.170 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:30.739 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:30.739 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:30.739 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:30.740 23:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:31.309 23:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:31.309 23:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:40:31.309 23:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:31.879 23:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:32.447 23:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:40:33.383 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:40:33.383 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:33.383 23:21:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:33.383 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:33.641 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:33.641 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:33.641 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:33.641 23:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:34.209 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:34.209 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:34.209 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:34.209 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:34.467 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:34.467 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:34.467 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:34.467 23:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:35.038 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:35.038 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:35.038 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:35.038 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:35.608 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:35.608 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:35.608 23:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:35.608 23:21:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:36.179 23:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:36.179 23:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:40:36.179 23:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:36.747 23:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:37.007 23:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:40:38.386 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:40:38.386 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:38.386 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.386 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:38.645 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:38.645 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:38.645 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.645 23:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:38.902 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:38.902 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:38.902 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.902 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:39.159 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:39.159 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:39.159 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:39.159 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:39.723 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:39.723 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:39.723 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:39.723 23:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:39.982 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:39.982 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:39.982 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:39.982 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:40.551 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:40.551 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:40:40.552 23:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:41.122 23:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:41.691 23:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:40:42.631 23:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:40:42.631 23:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:42.631 23:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:42.631 23:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:43.199 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:43.199 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:43.199 23:21:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.199 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:43.767 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:43.767 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:43.767 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:43.767 23:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.025 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.025 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:44.025 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.025 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:44.283 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.283 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:44.283 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.283 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:44.540 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:44.540 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:44.540 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.540 23:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:45.109 23:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:45.109 23:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:40:45.680 23:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:40:45.680 23:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:46.704 23:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:46.964 23:21:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:47.900 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:47.900 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:47.900 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:47.900 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:48.157 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:48.157 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:48.157 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:48.157 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:48.726 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:48.726 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:48.726 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:48.726 23:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:49.295 23:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.295 23:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:49.295 23:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.295 23:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:49.866 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.866 23:21:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:49.866 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:49.866 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.432 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:50.432 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:50.432 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.432 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:50.693 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:50.693 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:50.693 23:21:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:51.260 23:21:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:51.517 23:21:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:40:52.453 23:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:40:52.453 23:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:52.453 23:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.453 23:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:53.023 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:53.023 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:53.023 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:53.023 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:53.962 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:53.962 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:53.962 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:53.962 23:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:54.220 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:54.220 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:54.220 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:54.220 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:54.479 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:54.479 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:54.479 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:54.479 23:21:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:55.050 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.050 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:55.050 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.050 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:55.617 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.617 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:40:55.617 23:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:55.876 23:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:56.440 23:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
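The trace above repeatedly exercises three small helpers from host/multipath_status.sh: set_ANA_state issues one nvmf_subsystem_listener_set_ana_state RPC per listener port, port_status pipes bdev_nvme_get_io_paths through a jq select on the port's trsvcid and compares one field against an expected value, and check_status runs six port_status checks (current, connected, accessible for ports 4420 and 4421). Below is a minimal sketch reconstructed from the commands visible in the trace; only the RPC and jq invocations are taken from the log, the surrounding function bodies are assumptions and not the verbatim test script.

#!/usr/bin/env bash
# Sketch reconstructed from the trace above -- not the verbatim multipath_status.sh.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {     # $1 = port, $2 = io_path field, $3 = expected value
    local status
    status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ "$status" == "$3" ]]
}

check_status() {    # expected current/connected/accessible values for ports 4420 and 4421
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Each "sleep 1" in the trace gives the initiator time to observe the new ANA state before check_status runs.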
00:40:57.377 23:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:40:57.377 23:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:57.377 23:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.377 23:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:57.945 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:57.945 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:57.945 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.945 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:58.203 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.203 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:58.203 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.203 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:58.460 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.460 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:58.460 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.460 23:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:58.718 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.718 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:58.718 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.718 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:59.286 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:59.286 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:59.286 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:59.286 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:59.545 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:59.545 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:40:59.545 23:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:41:00.112 23:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:41:00.369 23:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:41:01.303 23:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:41:01.303 23:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:41:01.303 23:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:01.303 23:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:41:01.872 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:01.872 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:41:01.872 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:41:01.872 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:02.439 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:02.439 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:41:02.439 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:02.439 23:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:41:03.007 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:41:03.007 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:41:03.007 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:41:03.007 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.264 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.264 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:41:03.264 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.264 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:03.854 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.854 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:41:03.854 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.854 23:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1030207 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1030207 ']' 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1030207 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030207 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030207' 00:41:04.129 killing process with pid 1030207 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1030207 00:41:04.129 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1030207 00:41:04.388 Connection closed with partial response: 00:41:04.388 00:41:04.388 00:41:04.664 
23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1030207 00:41:04.664 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:41:04.664 [2024-07-22 23:20:48.011121] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:41:04.664 [2024-07-22 23:20:48.011214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030207 ] 00:41:04.664 EAL: No free 2048 kB hugepages reported on node 1 00:41:04.664 [2024-07-22 23:20:48.084857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.664 [2024-07-22 23:20:48.191555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:04.664 Running I/O for 90 seconds... 00:41:04.664 [2024-07-22 23:21:12.941719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.941790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.941842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.941867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.941899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.941923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.941954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.941977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:41:04.664 [2024-07-22 23:21:12.942169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.664 [2024-07-22 23:21:12.942635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.664 [2024-07-22 23:21:12.942657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.942762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.942813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.942865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.942917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.942969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.942999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.943407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.943989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
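The READ commands in this part of the dump complete with ASYMMETRIC ACCESS INACCESSIBLE (status 03/02), which is what the target returns while a listener's ANA state is inaccessible; these are the completions the initiator sees during the inaccessible phases checked earlier in the trace. For interactive debugging, the six jq invocations per check can be collapsed into one summary over the same RPC output; a hypothetical convenience one-liner, not part of the test script:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'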
00:41:04.665 [2024-07-22 23:21:12.944366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.944974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.944996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.945026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.945047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.945077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.945099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.945129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.945151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.945181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.945203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.945234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.665 [2024-07-22 23:21:12.945256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.665 [2024-07-22 23:21:12.945286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.666 [2024-07-22 23:21:12.945316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.945691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.666 [2024-07-22 23:21:12.947093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
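The WRITE commands above hit the same ANA-inaccessible completion path. Earlier in the trace (host/multipath_status.sh@116) the multipath policy for the bdev was switched to active_active, after which paths in the optimized state report current=true in the subsequent check_status calls; for reference, the RPC as it appears in the trace, with the bdev name Nvme0n1 taken from the log:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active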
00:41:04.666 [2024-07-22 23:21:12.947226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.947920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.947942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.948998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:04.666 [2024-07-22 23:21:12.949661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.666 [2024-07-22 23:21:12.949684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.949713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.949735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.949765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.949788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.949817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:04.667 [2024-07-22 23:21:12.949839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.949875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.949898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.949928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.949951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.949980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 
nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.950949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.950978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.951463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.951485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.952191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.667 [2024-07-22 23:21:12.952222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.952259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.952284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.952325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.952350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.952380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.667 [2024-07-22 23:21:12.952402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.667 [2024-07-22 23:21:12.952432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.952956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.952979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 
[2024-07-22 23:21:12.953667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.668 [2024-07-22 23:21:12.953826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.953878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.953929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.953968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.953991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.954021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.954043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.954073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.954095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.954125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.954147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.954176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31184 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.954198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.668 [2024-07-22 23:21:12.954227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.668 [2024-07-22 23:21:12.954250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.954975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.954997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.955048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.955099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.955159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:41:04.669 [2024-07-22 23:21:12.955239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.669 [2024-07-22 23:21:12.955632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.955975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.955997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.956026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.956047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.956077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.669 [2024-07-22 23:21:12.956099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:04.669 [2024-07-22 23:21:12.957138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:04.670 [2024-07-22 23:21:12.957815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.957969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.957997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.670 [2024-07-22 23:21:12.958443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.958959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.958989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.959010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.959039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.959061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.959090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.959111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:04.670 [2024-07-22 23:21:12.959140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.670 [2024-07-22 23:21:12.959162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:41:04.671 [2024-07-22 23:21:12.959360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.959869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.959891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.960668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.671 [2024-07-22 23:21:12.960736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.960789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.960842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.960893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.960945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.960975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.960996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:04.671 [2024-07-22 23:21:12.961632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.671 [2024-07-22 23:21:12.961918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.671 [2024-07-22 23:21:12.961940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.961969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.961996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.962050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.962101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.962152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.962203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.962255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.962306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.962976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.962997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:41:04.672 [2024-07-22 23:21:12.963179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.672 [2024-07-22 23:21:12.963642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.963694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.963744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.963796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.963847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.963898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.672 [2024-07-22 23:21:12.963954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.672 [2024-07-22 23:21:12.963985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.673 [2024-07-22 23:21:12.964111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.964529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.964552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.965678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.965741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.965794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:04.673 [2024-07-22 23:21:12.965845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.965895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.965947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.965976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.965997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.673 [2024-07-22 23:21:12.966734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:04.673 [2024-07-22 23:21:12.966764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.966785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.966814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.966836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.966865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.966886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.966916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.966938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.966973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.966995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:41:04.674 [2024-07-22 23:21:12.967393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.967960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.967982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.968389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.968411] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.969172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.969233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.674 [2024-07-22 23:21:12.969285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.969349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.969401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.674 [2024-07-22 23:21:12.969452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.674 [2024-07-22 23:21:12.969481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.969951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.969973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.675 [2024-07-22 23:21:12.970880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.970937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.970968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.970990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:41:04.675 [2024-07-22 23:21:12.971223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.675 [2024-07-22 23:21:12.971492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.675 [2024-07-22 23:21:12.971514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.971970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.971991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.972043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.972094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.972146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.972198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.676 [2024-07-22 23:21:12.972692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:04.676 [2024-07-22 23:21:12.972797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.972959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.972987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.973009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.973039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.973062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31496 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:04.676 [2024-07-22 23:21:12.974544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.676 [2024-07-22 23:21:12.974572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.974973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.974995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:41:04.677 [2024-07-22 23:21:12.975405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.677 [2024-07-22 23:21:12.975529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.975976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.975997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.677 [2024-07-22 23:21:12.976559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.677 [2024-07-22 23:21:12.976580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.976610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.976632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.976661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.976683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.976712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.976734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.976763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.976785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.976814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.976836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.976865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.976887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:41:04.678 [2024-07-22 23:21:12.977685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.977745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.977797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.678 [2024-07-22 23:21:12.977848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.977909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.977961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.977990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.678 [2024-07-22 23:21:12.978962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.678 [2024-07-22 23:21:12.978992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:41:04.679 [2024-07-22 23:21:12.979254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.679 [2024-07-22 23:21:12.979442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.679 [2024-07-22 23:21:12.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.679 [2024-07-22 23:21:12.979544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.679 [2024-07-22 23:21:12.979595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.679 [2024-07-22 23:21:12.979649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.679 [2024-07-22 23:21:12.979700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.679 [2024-07-22 23:21:12.979729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.679 [2024-07-22 23:21:12.979751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:41:04.679 [2024-07-22 23:21:12.979780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:04.679 [2024-07-22 23:21:12.979802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:41:04.679 [2024-07-22 23:21:12.980804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:41:04.679 [2024-07-22 23:21:12.980826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:41:04.679-00:41:04.684 [2024-07-22 23:21:12.979-12.993] nvme_qpair.c: (the same print_command/print_completion pair repeats for the remaining queued READ and WRITE commands on sqid:1, nsid:1, lba range roughly 30896-31912, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the individual cid, lba, and sqhd values are omitted here)
00:41:04.684 [2024-07-22 23:21:12.993431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83
nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.684 [2024-07-22 23:21:12.993453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:04.684 [2024-07-22 23:21:12.993482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.684 [2024-07-22 23:21:12.993504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.684 [2024-07-22 23:21:12.993533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.684 [2024-07-22 23:21:12.993554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.993589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.993612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.993641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.993663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.993710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.993736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.993766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.993789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.993818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.993840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.994612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.994679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.994733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.994785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.994837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.685 [2024-07-22 23:21:12.994888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.994940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.994976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:41:04.685 [2024-07-22 23:21:12.995244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.995970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.995999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.685 [2024-07-22 23:21:12.996353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.685 [2024-07-22 23:21:12.996383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.996405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.996458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.996509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:41:04.686 [2024-07-22 23:21:12.996817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.996954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.996977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.997841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.997894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.997945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.997974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.997996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.686 [2024-07-22 23:21:12.998318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.686 [2024-07-22 23:21:12.998351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:41:04.686 [2024-07-22 23:21:12.998401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.686 [2024-07-22 23:21:12.998423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:12.998453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:12.998475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:12.998505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:12.998528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.010961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.010998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.011885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:04.687 [2024-07-22 23:21:13.011944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.011981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.012003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.012041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.012063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:04.687 [2024-07-22 23:21:13.012100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.687 [2024-07-22 23:21:13.012122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.012960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.012988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013149] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:13.013746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:13.013775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.546682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.688 [2024-07-22 23:21:36.546760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.546836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.688 [2024-07-22 23:21:36.546878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 
p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.546973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.688 [2024-07-22 23:21:36.547398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.688 [2024-07-22 23:21:36.547451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.688 [2024-07-22 23:21:36.547560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.688 [2024-07-22 23:21:36.547652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.688 [2024-07-22 23:21:36.547676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.547706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.547730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.547761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.547784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.547814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.547837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.547868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.547891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.547922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.547944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.547974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.547998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.548028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.548051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.548082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.548105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.548136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.548159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.548190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.548214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.550666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.550703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.550749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.550776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.550807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.550831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.550862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.550884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.550914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.550938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.550968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.550992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
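Note: the *NOTICE* pairs throughout this stretch are SPDK echoing each in-flight I/O (nvme_io_qpair_print_command) together with its completion (spdk_nvme_print_completion); every READ/WRITE on qid:1 here completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), consistent with the multipath_status test holding this path's ANA state at "inaccessible". A hypothetical helper, not part of the test suite, for summarizing such a flood from a saved copy of this console output; the $log path is an assumption:
    log=./console.log   # assumed capture of this output
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"                     # failed completions
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  "$log"     # reads hitting the inaccessible path
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' "$log"     # writes hitting the inaccessible path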
00:41:04.689 [2024-07-22 23:21:36.551045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.551099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.551153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.551260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.689 [2024-07-22 23:21:36.551707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.551953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.551975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.552006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.552029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.552059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.552087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.552120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.552142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.552173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.689 [2024-07-22 23:21:36.552196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.689 [2024-07-22 23:21:36.552227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.690 [2024-07-22 23:21:36.552250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.552280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.690 [2024-07-22 23:21:36.552303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.552345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.552369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.552400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.552423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.552453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.552476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.552507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.552531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.553097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.553130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.553167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.553193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:41:04.690 [2024-07-22 23:21:36.553224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.553247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.690 [2024-07-22 23:21:36.553278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.690 [2024-07-22 23:21:36.553316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.690 Received shutdown signal, test time was about 50.321695 seconds 00:41:04.690 00:41:04.690 Latency(us) 00:41:04.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:04.690 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:41:04.690 Verification LBA range: start 0x0 length 0x4000 00:41:04.690 Nvme0n1 : 50.32 6265.94 24.48 0.00 0.00 20391.44 1334.99 5095302.64 00:41:04.690 =================================================================================================================== 00:41:04.690 Total : 6265.94 24.48 0.00 0.00 20391.44 1334.99 5095302.64 00:41:04.690 23:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:05.260 rmmod nvme_tcp 00:41:05.260 rmmod nvme_fabrics 00:41:05.260 rmmod nvme_keyring 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1029762 ']' 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1029762 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1029762 ']' 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill 
-0 1029762 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1029762 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1029762' 00:41:05.260 killing process with pid 1029762 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1029762 00:41:05.260 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1029762 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:05.519 23:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:08.060 00:41:08.060 real 1m3.894s 00:41:08.060 user 3m20.030s 00:41:08.060 sys 0m17.878s 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:41:08.060 ************************************ 00:41:08.060 END TEST nvmf_host_multipath_status 00:41:08.060 ************************************ 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.060 ************************************ 00:41:08.060 START TEST nvmf_discovery_remove_ifc 00:41:08.060 ************************************ 00:41:08.060 23:21:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:08.060 * Looking for test storage... 00:41:08.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:08.060 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
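Note: the xtrace above shows discovery_remove_ifc.sh sourcing test/nvmf/common.sh, which pins the TCP ports (4420/4421/4422, discovery on 8009), generates a fresh host NQN with nvme gen-hostnqn, and prepends the Go/protoc/golangci toolchains to PATH via paths/export.sh. A minimal sketch of that environment setup, assuming nvme-cli is installed; deriving NVME_HOSTID from the uuid-style NQN is an assumption based on the values printed above, not a quote of common.sh:
    NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
    discovery_port=8009                                # well-known NVMe discovery port
    NVME_HOSTNQN=$(nvme gen-hostnqn)                   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}                    # assumed: reuse the uuid part as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")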
00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:41:08.061 23:21:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:41:11.361 23:21:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:11.361 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:11.361 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:11.361 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:11.362 Found net devices under 0000:84:00.0: cvl_0_0 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:11.362 
23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:11.362 Found net devices under 0000:84:00.1: cvl_0_1 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:11.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:11.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:41:11.362 00:41:11.362 --- 10.0.0.2 ping statistics --- 00:41:11.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.362 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:11.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:11.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:41:11.362 00:41:11.362 --- 10.0.0.1 ping statistics --- 00:41:11.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.362 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1038985 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1038985 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1038985 ']' 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:11.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
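Note: up to this point nvmf/common.sh has mapped the two E810 ports (0000:84:00.0 -> cvl_0_0, 0000:84:00.1 -> cvl_0_1), moved the target-side interface into a fresh cvl_0_0_ns_spdk namespace with 10.0.0.2/24, addressed the initiator side as 10.0.0.1/24, opened TCP port 4420, verified reachability with one ping in each direction, loaded nvme-tcp, and is now starting nvmf_tgt inside the namespace (-m 0x2, pid 1038985) while waiting for its RPC socket. The namespace plumbing, condensed from the xtrace above into a standalone sketch (the cvl_0_* names are specific to this CI host):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator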
00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:11.362 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.362 [2024-07-22 23:21:47.530923] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:41:11.362 [2024-07-22 23:21:47.531091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:11.362 EAL: No free 2048 kB hugepages reported on node 1 00:41:11.362 [2024-07-22 23:21:47.660336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.621 [2024-07-22 23:21:47.769870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:11.621 [2024-07-22 23:21:47.769940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:11.621 [2024-07-22 23:21:47.769959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:11.621 [2024-07-22 23:21:47.769976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:11.621 [2024-07-22 23:21:47.769990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:11.621 [2024-07-22 23:21:47.770035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:11.621 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:11.621 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:41:11.621 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:11.622 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:11.622 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.881 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:11.881 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:41:11.881 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:11.881 23:21:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.881 [2024-07-22 23:21:47.952589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:11.881 [2024-07-22 23:21:47.960807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:41:11.881 null0 00:41:11.881 [2024-07-22 23:21:47.992752] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1039062 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1039062 /tmp/host.sock 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1039062 ']' 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:41:11.881 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:11.881 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.881 [2024-07-22 23:21:48.096811] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:41:11.881 [2024-07-22 23:21:48.096999] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039062 ] 00:41:11.881 EAL: No free 2048 kB hugepages reported on node 1 00:41:12.141 [2024-07-22 23:21:48.210501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.141 [2024-07-22 23:21:48.321516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:12.141 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:12.400 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:12.400 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:41:12.400 
23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:12.400 23:21:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:13.339 [2024-07-22 23:21:49.561272] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:41:13.339 [2024-07-22 23:21:49.561316] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:41:13.339 [2024-07-22 23:21:49.561346] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:41:13.599 [2024-07-22 23:21:49.688860] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:41:13.859 [2024-07-22 23:21:49.913928] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:13.859 [2024-07-22 23:21:49.914010] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:13.859 [2024-07-22 23:21:49.914067] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:13.859 [2024-07-22 23:21:49.914098] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:41:13.859 [2024-07-22 23:21:49.914130] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:41:13.859 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.859 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:13.860 [2024-07-22 23:21:49.919824] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1be4650 was disconnected and freed. delete nvme_qpair. 
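Note: at this point the host-side app (pid 1039062, RPC socket /tmp/host.sock) has attached the discovery controller at 10.0.0.2:8009, found nqn.2016-06.io.spdk:cnode0 on port 4420 and created the bdev nvme0n1. Below, the script confirms nvme0n1 via bdev_get_bdevs, deletes the target-side address and downs cvl_0_0 inside the namespace, then polls once per second until the bdev list is empty again. A simplified sketch of that polling under those assumptions (the real helpers are get_bdev_list/wait_for_bdev in discovery_remove_ifc.sh; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):
    expected=''    # wait_for_bdev '' : poll until no bdevs remain
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    until [[ "$(get_bdev_list)" == "$expected" ]]; do
        sleep 1
    done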
00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:41:13.860 23:21:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:13.860 23:21:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:14.798 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:15.058 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.058 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:15.058 23:21:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
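The @75/@76 commands above are the simulated cable pull: the target's data port lives inside the cvl_0_0_ns_spdk namespace, so deleting its address and downing the link breaks the established NVMe/TCP connection while the discovery entry stays configured on the host side. Condensed, using the namespace and interface names from this run and the wait_for_bdev helper sketched earlier:

# Simulated interface removal on the target side.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# With --ctrlr-loss-timeout-sec 2 the reconnect attempts give up quickly and
# the host-side bdev list is expected to drain to empty.
wait_for_bdev ''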
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:15.995 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:15.995 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:15.995 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:15.995 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.996 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:15.996 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.996 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:15.996 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.996 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:15.996 23:21:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:17.377 23:21:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:41:18.315 23:21:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:19.253 [2024-07-22 23:21:55.354201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:41:19.253 [2024-07-22 23:21:55.354288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.253 [2024-07-22 23:21:55.354326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.253 [2024-07-22 23:21:55.354352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.253 [2024-07-22 23:21:55.354373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.253 [2024-07-22 23:21:55.354401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.253 [2024-07-22 23:21:55.354420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.253 [2024-07-22 23:21:55.354439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.253 [2024-07-22 23:21:55.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.253 [2024-07-22 23:21:55.354476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.253 [2024-07-22 23:21:55.354494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.253 [2024-07-22 23:21:55.354512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baaeb0 is same with the state(5) to be set 00:41:19.253 [2024-07-22 23:21:55.364230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baaeb0 (9): Bad file descriptor 00:41:19.253 [2024-07-22 23:21:55.374290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:19.253 23:21:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:20.192 [2024-07-22 23:21:56.431529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:41:20.192 [2024-07-22 23:21:56.431593] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baaeb0 with addr=10.0.0.2, port=4420 00:41:20.192 [2024-07-22 23:21:56.431622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baaeb0 is same with the state(5) to be set 00:41:20.192 [2024-07-22 23:21:56.431666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baaeb0 (9): Bad file descriptor 00:41:20.192 [2024-07-22 23:21:56.432215] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:41:20.192 [2024-07-22 23:21:56.432258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:20.192 [2024-07-22 23:21:56.432279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:20.192 [2024-07-22 23:21:56.432300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:20.192 [2024-07-22 23:21:56.432354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:20.192 [2024-07-22 23:21:56.432379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:20.192 23:21:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:20.192 23:21:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:20.192 23:21:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:21.129 [2024-07-22 23:21:57.434881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:21.129 [2024-07-22 23:21:57.434919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:21.129 [2024-07-22 23:21:57.434947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:21.129 [2024-07-22 23:21:57.434965] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:41:21.129 [2024-07-22 23:21:57.434991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:21.129 [2024-07-22 23:21:57.435035] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:41:21.129 [2024-07-22 23:21:57.435080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:21.129 [2024-07-22 23:21:57.435107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:21.129 [2024-07-22 23:21:57.435129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:21.129 [2024-07-22 23:21:57.435147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:21.129 [2024-07-22 23:21:57.435166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:21.129 [2024-07-22 23:21:57.435183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:21.129 [2024-07-22 23:21:57.435202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:21.129 [2024-07-22 23:21:57.435220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:21.129 [2024-07-22 23:21:57.435238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:21.129 [2024-07-22 23:21:57.435256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:21.129 [2024-07-22 23:21:57.435273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:41:21.129 [2024-07-22 23:21:57.435397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baa350 (9): Bad file descriptor 00:41:21.129 [2024-07-22 23:21:57.436434] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:41:21.129 [2024-07-22 23:21:57.436464] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:21.389 23:21:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
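The @82/@83/@86 lines above are the mirror-image step: re-adding the address and bringing the link back up lets the still-running discovery service reconnect, and the subsystem is expected to come back as a new controller (nvme1, bdev nvme1n1). Condensed with the same assumptions as the removal sketch:

# Re-plug the target interface and wait for discovery to re-attach.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1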
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:22.772 23:21:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:23.374 [2024-07-22 23:21:59.446984] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:41:23.374 [2024-07-22 23:21:59.447024] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:41:23.374 [2024-07-22 23:21:59.447054] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:41:23.374 [2024-07-22 23:21:59.534353] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:23.634 [2024-07-22 23:21:59.720401] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:23.634 [2024-07-22 23:21:59.720467] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:23.634 [2024-07-22 23:21:59.720511] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:23.634 [2024-07-22 23:21:59.720541] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:41:23.634 [2024-07-22 23:21:59.720570] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:23.634 23:21:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:23.634 [2024-07-22 23:21:59.767118] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b98ab0 was disconnected and freed. delete nvme_qpair. 
00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1039062 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1039062 ']' 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1039062 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:24.572 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1039062 00:41:24.832 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:24.832 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:24.832 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1039062' 00:41:24.832 killing process with pid 1039062 00:41:24.832 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1039062 00:41:24.832 23:22:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1039062 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:25.093 rmmod nvme_tcp 00:41:25.093 rmmod nvme_fabrics 00:41:25.093 rmmod nvme_keyring 00:41:25.093 23:22:01 
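killprocess above is the generic autotest helper for stopping an SPDK app; its control flow can be read straight off the trace: check the pid argument, confirm the process is alive, make sure it is not sudo itself, then signal it and wait. A reconstruction of that shape (the real helper in common/autotest_common.sh has additional branches not exercised here):

# killprocess, reconstructed from the traced checks (sketch, not the original).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1

    # Nothing to do if the process is already gone.
    kill -0 "$pid" 2>/dev/null || return 0

    if [ "$(uname)" = Linux ]; then
        # The trace compares the command name against "sudo" before signalling;
        # for an SPDK app this resolves to reactor_0.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}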
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1038985 ']' 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1038985 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1038985 ']' 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1038985 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1038985 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1038985' 00:41:25.093 killing process with pid 1038985 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1038985 00:41:25.093 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1038985 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:25.354 23:22:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:27.896 00:41:27.896 real 0m19.702s 00:41:27.896 user 0m27.295s 00:41:27.896 sys 0m4.479s 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:27.896 ************************************ 00:41:27.896 END TEST nvmf_discovery_remove_ifc 00:41:27.896 ************************************ 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host -- 
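nvmftestfini at the end of the test unwinds what nvmftestinit set up: flush and unload the kernel NVMe/TCP initiator modules, stop the target app (pid 1038985 in this run), and remove the namespace plumbing. Roughly, based on the commands visible above; _remove_spdk_ns is the framework helper whose body is not captured by this trace, and the retry loop around the module removal is omitted:

# Approximate teardown, read off the nvmftestfini trace (sketch).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt started for this test; killprocess as sketched earlier.
killprocess 1038985

# Tear down the target-side namespace and flush the initiator address.
_remove_spdk_ns            # framework helper; assumed to remove cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1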
nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.896 ************************************ 00:41:27.896 START TEST nvmf_identify_kernel_target 00:41:27.896 ************************************ 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:27.896 * Looking for test storage... 00:41:27.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e 
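One setup detail from the common.sh sourcing above that matters later: the host identity passed to nvme discover is generated up front with nvme gen-hostnqn, and the bare UUID is reused as the host ID. The trace only shows the resulting values, so the derivation below is an assumption about how common.sh splits them; the NVME_HOST array line matches the trace:

# Host identity used by the later 'nvme discover' call (sketch of the setup).
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # assumed: strip the prefix to keep the bare UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")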
/bin/wpdk_common.sh ]] 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:27.896 23:22:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:41:27.896 23:22:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:41:31.196 23:22:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.196 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:31.197 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:31.197 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:31.197 Found net devices under 0000:84:00.0: cvl_0_0 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:31.197 Found net devices under 0000:84:00.1: cvl_0_1 00:41:31.197 
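The "Found 0000:84:00.0 (0x8086 - 0x159b)" and "Found net devices under ..." lines above are gather_supported_nvmf_pci_devs walking an allow-list of Intel E810/X722 and Mellanox device IDs and then mapping each matching PCI function to its kernel net device through sysfs. The core of that mapping, lifted from the trace (allow-list construction and RDMA-only branches omitted):

# Map a PCI network function to its net device name, as done for both E810 ports.
pci=0000:84:00.0                                   # first port in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device names
echo "Found net devices under $pci: ${pci_net_devs[*]}"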
23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.197 23:22:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:31.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:31.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:41:31.197 00:41:31.197 --- 10.0.0.2 ping statistics --- 00:41:31.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.197 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:31.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:31.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:41:31.197 00:41:31.197 --- 10.0.0.1 ping statistics --- 00:41:31.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.197 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:31.197 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
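Both pings above close out nvmf_tcp_init: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and reachability is checked in both directions before any NVMe traffic flows. Condensed from the commands in the trace:

# nvmf_tcp_init, condensed: target port behind a namespace, initiator in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow inbound NVMe/TCP (4420) on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability in both directions, matching the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1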
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:31.198 23:22:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:32.582 Waiting for block devices as requested 00:41:32.582 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:41:32.840 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:32.840 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:33.098 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:33.098 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:33.098 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:33.358 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:33.358 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:33.358 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:33.617 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:33.617 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:33.617 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:33.878 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:33.878 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:33.878 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:34.137 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:34.137 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
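Before exporting anything, configure_kernel_target needs a local NVMe namespace it can safely hand to the kernel target: the loop above takes the first /sys/block/nvme* device that is not zoned and that block_in_use reports as free (no partition table, per the spdk-gpt.py and blkid probes). A sketch of that selection based on the helpers' observable behavior rather than their exact bodies:

# Pick the first idle, non-zoned local NVMe block device (sketch).
nvme=
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=${block##*/}

    # Skip zoned namespaces; the kernel target here wants a plain block device.
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi

    # Treat the device as busy if blkid finds any partition table on it
    # (the trace also runs SPDK's spdk-gpt.py probe, omitted here).
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        continue
    fi

    nvme=/dev/$dev
    break
done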
00:41:34.137 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:34.397 No valid GPT data, bailing 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:34.397 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:41:34.397 00:41:34.397 Discovery Log Number of Records 2, Generation counter 2 00:41:34.397 =====Discovery Log Entry 0====== 00:41:34.397 trtype: tcp 00:41:34.397 adrfam: ipv4 00:41:34.397 subtype: current discovery subsystem 00:41:34.397 treq: not specified, sq flow control disable supported 00:41:34.397 portid: 1 00:41:34.397 trsvcid: 4420 00:41:34.397 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:34.397 traddr: 10.0.0.1 00:41:34.397 eflags: none 00:41:34.397 sectype: none 00:41:34.397 =====Discovery Log Entry 1====== 00:41:34.397 trtype: tcp 00:41:34.397 adrfam: ipv4 00:41:34.397 subtype: nvme subsystem 00:41:34.397 treq: not specified, sq flow control disable supported 00:41:34.397 portid: 1 00:41:34.397 trsvcid: 4420 00:41:34.397 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:34.397 traddr: 10.0.0.1 00:41:34.397 eflags: none 00:41:34.397 sectype: none 00:41:34.397 23:22:10 
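The kernel NVMe-oF target above is assembled entirely through configfs: setup.sh reset hands the NVMe SSD back to the kernel nvme driver, the loop over /sys/block/nvme* picks the first namespace that is neither zoned (queue/zoned reads "none") nor already carrying a partition table ("No valid GPT data, bailing" means nvme0n1 is free to use), and a subsystem, namespace and port are then created and linked. Bash xtrace does not print redirection targets, so the bare echo commands above hide the attribute files they write to; the sketch below is a typical kernel nvmet configfs sequence that matches those echoes. The attribute file names come from the standard nvmet configfs layout, not from this log, so treat them as assumptions (one further echo in the trace stores the SPDK-nqn.2016-06.io.spdk:testnqn string as the subsystem's model/serial identity and is left out here).

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"        # accept any host NQN
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # back the namespace with the free SSD
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on the port

The nvme discover run against 10.0.0.1:4420 confirms the wiring: two log records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.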
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:41:34.397 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:41:34.397 EAL: No free 2048 kB hugepages reported on node 1 00:41:34.658 ===================================================== 00:41:34.658 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:41:34.658 ===================================================== 00:41:34.658 Controller Capabilities/Features 00:41:34.658 ================================ 00:41:34.658 Vendor ID: 0000 00:41:34.658 Subsystem Vendor ID: 0000 00:41:34.658 Serial Number: 848677e597ea29e7f5a1 00:41:34.658 Model Number: Linux 00:41:34.658 Firmware Version: 6.7.0-68 00:41:34.658 Recommended Arb Burst: 0 00:41:34.658 IEEE OUI Identifier: 00 00 00 00:41:34.658 Multi-path I/O 00:41:34.658 May have multiple subsystem ports: No 00:41:34.658 May have multiple controllers: No 00:41:34.658 Associated with SR-IOV VF: No 00:41:34.658 Max Data Transfer Size: Unlimited 00:41:34.658 Max Number of Namespaces: 0 00:41:34.658 Max Number of I/O Queues: 1024 00:41:34.658 NVMe Specification Version (VS): 1.3 00:41:34.658 NVMe Specification Version (Identify): 1.3 00:41:34.658 Maximum Queue Entries: 1024 00:41:34.658 Contiguous Queues Required: No 00:41:34.658 Arbitration Mechanisms Supported 00:41:34.658 Weighted Round Robin: Not Supported 00:41:34.658 Vendor Specific: Not Supported 00:41:34.658 Reset Timeout: 7500 ms 00:41:34.658 Doorbell Stride: 4 bytes 00:41:34.658 NVM Subsystem Reset: Not Supported 00:41:34.658 Command Sets Supported 00:41:34.658 NVM Command Set: Supported 00:41:34.658 Boot Partition: Not Supported 00:41:34.658 Memory Page Size Minimum: 4096 bytes 00:41:34.658 Memory Page Size Maximum: 4096 bytes 00:41:34.658 Persistent Memory Region: Not Supported 00:41:34.658 Optional Asynchronous Events Supported 00:41:34.658 Namespace Attribute Notices: Not Supported 00:41:34.658 Firmware Activation Notices: Not Supported 00:41:34.658 ANA Change Notices: Not Supported 00:41:34.658 PLE Aggregate Log Change Notices: Not Supported 00:41:34.658 LBA Status Info Alert Notices: Not Supported 00:41:34.658 EGE Aggregate Log Change Notices: Not Supported 00:41:34.658 Normal NVM Subsystem Shutdown event: Not Supported 00:41:34.658 Zone Descriptor Change Notices: Not Supported 00:41:34.658 Discovery Log Change Notices: Supported 00:41:34.658 Controller Attributes 00:41:34.658 128-bit Host Identifier: Not Supported 00:41:34.658 Non-Operational Permissive Mode: Not Supported 00:41:34.658 NVM Sets: Not Supported 00:41:34.658 Read Recovery Levels: Not Supported 00:41:34.658 Endurance Groups: Not Supported 00:41:34.658 Predictable Latency Mode: Not Supported 00:41:34.658 Traffic Based Keep ALive: Not Supported 00:41:34.658 Namespace Granularity: Not Supported 00:41:34.658 SQ Associations: Not Supported 00:41:34.658 UUID List: Not Supported 00:41:34.658 Multi-Domain Subsystem: Not Supported 00:41:34.658 Fixed Capacity Management: Not Supported 00:41:34.658 Variable Capacity Management: Not Supported 00:41:34.658 Delete Endurance Group: Not Supported 00:41:34.658 Delete NVM Set: Not Supported 00:41:34.658 Extended LBA Formats Supported: Not Supported 00:41:34.658 Flexible Data Placement Supported: Not Supported 00:41:34.658 00:41:34.658 Controller Memory Buffer Support 00:41:34.658 ================================ 00:41:34.658 Supported: No 
00:41:34.658 00:41:34.658 Persistent Memory Region Support 00:41:34.658 ================================ 00:41:34.658 Supported: No 00:41:34.658 00:41:34.658 Admin Command Set Attributes 00:41:34.658 ============================ 00:41:34.658 Security Send/Receive: Not Supported 00:41:34.658 Format NVM: Not Supported 00:41:34.658 Firmware Activate/Download: Not Supported 00:41:34.658 Namespace Management: Not Supported 00:41:34.658 Device Self-Test: Not Supported 00:41:34.658 Directives: Not Supported 00:41:34.658 NVMe-MI: Not Supported 00:41:34.658 Virtualization Management: Not Supported 00:41:34.658 Doorbell Buffer Config: Not Supported 00:41:34.658 Get LBA Status Capability: Not Supported 00:41:34.658 Command & Feature Lockdown Capability: Not Supported 00:41:34.658 Abort Command Limit: 1 00:41:34.658 Async Event Request Limit: 1 00:41:34.658 Number of Firmware Slots: N/A 00:41:34.658 Firmware Slot 1 Read-Only: N/A 00:41:34.658 Firmware Activation Without Reset: N/A 00:41:34.658 Multiple Update Detection Support: N/A 00:41:34.658 Firmware Update Granularity: No Information Provided 00:41:34.658 Per-Namespace SMART Log: No 00:41:34.658 Asymmetric Namespace Access Log Page: Not Supported 00:41:34.658 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:41:34.658 Command Effects Log Page: Not Supported 00:41:34.658 Get Log Page Extended Data: Supported 00:41:34.658 Telemetry Log Pages: Not Supported 00:41:34.658 Persistent Event Log Pages: Not Supported 00:41:34.658 Supported Log Pages Log Page: May Support 00:41:34.658 Commands Supported & Effects Log Page: Not Supported 00:41:34.658 Feature Identifiers & Effects Log Page:May Support 00:41:34.658 NVMe-MI Commands & Effects Log Page: May Support 00:41:34.658 Data Area 4 for Telemetry Log: Not Supported 00:41:34.658 Error Log Page Entries Supported: 1 00:41:34.658 Keep Alive: Not Supported 00:41:34.658 00:41:34.658 NVM Command Set Attributes 00:41:34.658 ========================== 00:41:34.658 Submission Queue Entry Size 00:41:34.658 Max: 1 00:41:34.658 Min: 1 00:41:34.658 Completion Queue Entry Size 00:41:34.658 Max: 1 00:41:34.658 Min: 1 00:41:34.658 Number of Namespaces: 0 00:41:34.658 Compare Command: Not Supported 00:41:34.658 Write Uncorrectable Command: Not Supported 00:41:34.658 Dataset Management Command: Not Supported 00:41:34.658 Write Zeroes Command: Not Supported 00:41:34.658 Set Features Save Field: Not Supported 00:41:34.658 Reservations: Not Supported 00:41:34.658 Timestamp: Not Supported 00:41:34.658 Copy: Not Supported 00:41:34.658 Volatile Write Cache: Not Present 00:41:34.658 Atomic Write Unit (Normal): 1 00:41:34.658 Atomic Write Unit (PFail): 1 00:41:34.658 Atomic Compare & Write Unit: 1 00:41:34.658 Fused Compare & Write: Not Supported 00:41:34.658 Scatter-Gather List 00:41:34.658 SGL Command Set: Supported 00:41:34.658 SGL Keyed: Not Supported 00:41:34.658 SGL Bit Bucket Descriptor: Not Supported 00:41:34.658 SGL Metadata Pointer: Not Supported 00:41:34.658 Oversized SGL: Not Supported 00:41:34.658 SGL Metadata Address: Not Supported 00:41:34.658 SGL Offset: Supported 00:41:34.658 Transport SGL Data Block: Not Supported 00:41:34.658 Replay Protected Memory Block: Not Supported 00:41:34.658 00:41:34.658 Firmware Slot Information 00:41:34.658 ========================= 00:41:34.659 Active slot: 0 00:41:34.659 00:41:34.659 00:41:34.659 Error Log 00:41:34.659 ========= 00:41:34.659 00:41:34.659 Active Namespaces 00:41:34.659 ================= 00:41:34.659 Discovery Log Page 00:41:34.659 ================== 00:41:34.659 
Generation Counter: 2 00:41:34.659 Number of Records: 2 00:41:34.659 Record Format: 0 00:41:34.659 00:41:34.659 Discovery Log Entry 0 00:41:34.659 ---------------------- 00:41:34.659 Transport Type: 3 (TCP) 00:41:34.659 Address Family: 1 (IPv4) 00:41:34.659 Subsystem Type: 3 (Current Discovery Subsystem) 00:41:34.659 Entry Flags: 00:41:34.659 Duplicate Returned Information: 0 00:41:34.659 Explicit Persistent Connection Support for Discovery: 0 00:41:34.659 Transport Requirements: 00:41:34.659 Secure Channel: Not Specified 00:41:34.659 Port ID: 1 (0x0001) 00:41:34.659 Controller ID: 65535 (0xffff) 00:41:34.659 Admin Max SQ Size: 32 00:41:34.659 Transport Service Identifier: 4420 00:41:34.659 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:41:34.659 Transport Address: 10.0.0.1 00:41:34.659 Discovery Log Entry 1 00:41:34.659 ---------------------- 00:41:34.659 Transport Type: 3 (TCP) 00:41:34.659 Address Family: 1 (IPv4) 00:41:34.659 Subsystem Type: 2 (NVM Subsystem) 00:41:34.659 Entry Flags: 00:41:34.659 Duplicate Returned Information: 0 00:41:34.659 Explicit Persistent Connection Support for Discovery: 0 00:41:34.659 Transport Requirements: 00:41:34.659 Secure Channel: Not Specified 00:41:34.659 Port ID: 1 (0x0001) 00:41:34.659 Controller ID: 65535 (0xffff) 00:41:34.659 Admin Max SQ Size: 32 00:41:34.659 Transport Service Identifier: 4420 00:41:34.659 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:41:34.659 Transport Address: 10.0.0.1 00:41:34.659 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:34.659 EAL: No free 2048 kB hugepages reported on node 1 00:41:34.659 get_feature(0x01) failed 00:41:34.659 get_feature(0x02) failed 00:41:34.659 get_feature(0x04) failed 00:41:34.659 ===================================================== 00:41:34.659 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:34.659 ===================================================== 00:41:34.659 Controller Capabilities/Features 00:41:34.659 ================================ 00:41:34.659 Vendor ID: 0000 00:41:34.659 Subsystem Vendor ID: 0000 00:41:34.659 Serial Number: 566ac63a7048fc75265d 00:41:34.659 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:41:34.659 Firmware Version: 6.7.0-68 00:41:34.659 Recommended Arb Burst: 6 00:41:34.659 IEEE OUI Identifier: 00 00 00 00:41:34.659 Multi-path I/O 00:41:34.659 May have multiple subsystem ports: Yes 00:41:34.659 May have multiple controllers: Yes 00:41:34.659 Associated with SR-IOV VF: No 00:41:34.659 Max Data Transfer Size: Unlimited 00:41:34.659 Max Number of Namespaces: 1024 00:41:34.659 Max Number of I/O Queues: 128 00:41:34.659 NVMe Specification Version (VS): 1.3 00:41:34.659 NVMe Specification Version (Identify): 1.3 00:41:34.659 Maximum Queue Entries: 1024 00:41:34.659 Contiguous Queues Required: No 00:41:34.659 Arbitration Mechanisms Supported 00:41:34.659 Weighted Round Robin: Not Supported 00:41:34.659 Vendor Specific: Not Supported 00:41:34.659 Reset Timeout: 7500 ms 00:41:34.659 Doorbell Stride: 4 bytes 00:41:34.659 NVM Subsystem Reset: Not Supported 00:41:34.659 Command Sets Supported 00:41:34.659 NVM Command Set: Supported 00:41:34.659 Boot Partition: Not Supported 00:41:34.659 Memory Page Size Minimum: 4096 bytes 00:41:34.659 Memory Page Size Maximum: 4096 bytes 00:41:34.659 
Persistent Memory Region: Not Supported 00:41:34.659 Optional Asynchronous Events Supported 00:41:34.659 Namespace Attribute Notices: Supported 00:41:34.659 Firmware Activation Notices: Not Supported 00:41:34.659 ANA Change Notices: Supported 00:41:34.659 PLE Aggregate Log Change Notices: Not Supported 00:41:34.659 LBA Status Info Alert Notices: Not Supported 00:41:34.659 EGE Aggregate Log Change Notices: Not Supported 00:41:34.659 Normal NVM Subsystem Shutdown event: Not Supported 00:41:34.659 Zone Descriptor Change Notices: Not Supported 00:41:34.659 Discovery Log Change Notices: Not Supported 00:41:34.659 Controller Attributes 00:41:34.659 128-bit Host Identifier: Supported 00:41:34.659 Non-Operational Permissive Mode: Not Supported 00:41:34.659 NVM Sets: Not Supported 00:41:34.659 Read Recovery Levels: Not Supported 00:41:34.659 Endurance Groups: Not Supported 00:41:34.659 Predictable Latency Mode: Not Supported 00:41:34.659 Traffic Based Keep ALive: Supported 00:41:34.659 Namespace Granularity: Not Supported 00:41:34.659 SQ Associations: Not Supported 00:41:34.659 UUID List: Not Supported 00:41:34.659 Multi-Domain Subsystem: Not Supported 00:41:34.659 Fixed Capacity Management: Not Supported 00:41:34.659 Variable Capacity Management: Not Supported 00:41:34.659 Delete Endurance Group: Not Supported 00:41:34.659 Delete NVM Set: Not Supported 00:41:34.659 Extended LBA Formats Supported: Not Supported 00:41:34.659 Flexible Data Placement Supported: Not Supported 00:41:34.659 00:41:34.659 Controller Memory Buffer Support 00:41:34.659 ================================ 00:41:34.659 Supported: No 00:41:34.659 00:41:34.659 Persistent Memory Region Support 00:41:34.659 ================================ 00:41:34.659 Supported: No 00:41:34.659 00:41:34.659 Admin Command Set Attributes 00:41:34.659 ============================ 00:41:34.659 Security Send/Receive: Not Supported 00:41:34.659 Format NVM: Not Supported 00:41:34.659 Firmware Activate/Download: Not Supported 00:41:34.659 Namespace Management: Not Supported 00:41:34.659 Device Self-Test: Not Supported 00:41:34.659 Directives: Not Supported 00:41:34.659 NVMe-MI: Not Supported 00:41:34.659 Virtualization Management: Not Supported 00:41:34.659 Doorbell Buffer Config: Not Supported 00:41:34.659 Get LBA Status Capability: Not Supported 00:41:34.659 Command & Feature Lockdown Capability: Not Supported 00:41:34.659 Abort Command Limit: 4 00:41:34.659 Async Event Request Limit: 4 00:41:34.659 Number of Firmware Slots: N/A 00:41:34.659 Firmware Slot 1 Read-Only: N/A 00:41:34.659 Firmware Activation Without Reset: N/A 00:41:34.659 Multiple Update Detection Support: N/A 00:41:34.659 Firmware Update Granularity: No Information Provided 00:41:34.659 Per-Namespace SMART Log: Yes 00:41:34.659 Asymmetric Namespace Access Log Page: Supported 00:41:34.659 ANA Transition Time : 10 sec 00:41:34.659 00:41:34.659 Asymmetric Namespace Access Capabilities 00:41:34.659 ANA Optimized State : Supported 00:41:34.659 ANA Non-Optimized State : Supported 00:41:34.659 ANA Inaccessible State : Supported 00:41:34.659 ANA Persistent Loss State : Supported 00:41:34.659 ANA Change State : Supported 00:41:34.659 ANAGRPID is not changed : No 00:41:34.659 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:41:34.659 00:41:34.659 ANA Group Identifier Maximum : 128 00:41:34.659 Number of ANA Group Identifiers : 128 00:41:34.659 Max Number of Allowed Namespaces : 1024 00:41:34.659 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:41:34.659 Command Effects Log Page: Supported 
00:41:34.659 Get Log Page Extended Data: Supported 00:41:34.659 Telemetry Log Pages: Not Supported 00:41:34.659 Persistent Event Log Pages: Not Supported 00:41:34.659 Supported Log Pages Log Page: May Support 00:41:34.659 Commands Supported & Effects Log Page: Not Supported 00:41:34.659 Feature Identifiers & Effects Log Page:May Support 00:41:34.659 NVMe-MI Commands & Effects Log Page: May Support 00:41:34.659 Data Area 4 for Telemetry Log: Not Supported 00:41:34.659 Error Log Page Entries Supported: 128 00:41:34.659 Keep Alive: Supported 00:41:34.659 Keep Alive Granularity: 1000 ms 00:41:34.659 00:41:34.659 NVM Command Set Attributes 00:41:34.659 ========================== 00:41:34.659 Submission Queue Entry Size 00:41:34.659 Max: 64 00:41:34.659 Min: 64 00:41:34.659 Completion Queue Entry Size 00:41:34.659 Max: 16 00:41:34.659 Min: 16 00:41:34.659 Number of Namespaces: 1024 00:41:34.659 Compare Command: Not Supported 00:41:34.659 Write Uncorrectable Command: Not Supported 00:41:34.659 Dataset Management Command: Supported 00:41:34.659 Write Zeroes Command: Supported 00:41:34.659 Set Features Save Field: Not Supported 00:41:34.659 Reservations: Not Supported 00:41:34.659 Timestamp: Not Supported 00:41:34.659 Copy: Not Supported 00:41:34.659 Volatile Write Cache: Present 00:41:34.659 Atomic Write Unit (Normal): 1 00:41:34.659 Atomic Write Unit (PFail): 1 00:41:34.659 Atomic Compare & Write Unit: 1 00:41:34.659 Fused Compare & Write: Not Supported 00:41:34.660 Scatter-Gather List 00:41:34.660 SGL Command Set: Supported 00:41:34.660 SGL Keyed: Not Supported 00:41:34.660 SGL Bit Bucket Descriptor: Not Supported 00:41:34.660 SGL Metadata Pointer: Not Supported 00:41:34.660 Oversized SGL: Not Supported 00:41:34.660 SGL Metadata Address: Not Supported 00:41:34.660 SGL Offset: Supported 00:41:34.660 Transport SGL Data Block: Not Supported 00:41:34.660 Replay Protected Memory Block: Not Supported 00:41:34.660 00:41:34.660 Firmware Slot Information 00:41:34.660 ========================= 00:41:34.660 Active slot: 0 00:41:34.660 00:41:34.660 Asymmetric Namespace Access 00:41:34.660 =========================== 00:41:34.660 Change Count : 0 00:41:34.660 Number of ANA Group Descriptors : 1 00:41:34.660 ANA Group Descriptor : 0 00:41:34.660 ANA Group ID : 1 00:41:34.660 Number of NSID Values : 1 00:41:34.660 Change Count : 0 00:41:34.660 ANA State : 1 00:41:34.660 Namespace Identifier : 1 00:41:34.660 00:41:34.660 Commands Supported and Effects 00:41:34.660 ============================== 00:41:34.660 Admin Commands 00:41:34.660 -------------- 00:41:34.660 Get Log Page (02h): Supported 00:41:34.660 Identify (06h): Supported 00:41:34.660 Abort (08h): Supported 00:41:34.660 Set Features (09h): Supported 00:41:34.660 Get Features (0Ah): Supported 00:41:34.660 Asynchronous Event Request (0Ch): Supported 00:41:34.660 Keep Alive (18h): Supported 00:41:34.660 I/O Commands 00:41:34.660 ------------ 00:41:34.660 Flush (00h): Supported 00:41:34.660 Write (01h): Supported LBA-Change 00:41:34.660 Read (02h): Supported 00:41:34.660 Write Zeroes (08h): Supported LBA-Change 00:41:34.660 Dataset Management (09h): Supported 00:41:34.660 00:41:34.660 Error Log 00:41:34.660 ========= 00:41:34.660 Entry: 0 00:41:34.660 Error Count: 0x3 00:41:34.660 Submission Queue Id: 0x0 00:41:34.660 Command Id: 0x5 00:41:34.660 Phase Bit: 0 00:41:34.660 Status Code: 0x2 00:41:34.660 Status Code Type: 0x0 00:41:34.660 Do Not Retry: 1 00:41:34.660 Error Location: 0x28 00:41:34.660 LBA: 0x0 00:41:34.660 Namespace: 0x0 00:41:34.660 Vendor Log 
Page: 0x0 00:41:34.660 ----------- 00:41:34.660 Entry: 1 00:41:34.660 Error Count: 0x2 00:41:34.660 Submission Queue Id: 0x0 00:41:34.660 Command Id: 0x5 00:41:34.660 Phase Bit: 0 00:41:34.660 Status Code: 0x2 00:41:34.660 Status Code Type: 0x0 00:41:34.660 Do Not Retry: 1 00:41:34.660 Error Location: 0x28 00:41:34.660 LBA: 0x0 00:41:34.660 Namespace: 0x0 00:41:34.660 Vendor Log Page: 0x0 00:41:34.660 ----------- 00:41:34.660 Entry: 2 00:41:34.660 Error Count: 0x1 00:41:34.660 Submission Queue Id: 0x0 00:41:34.660 Command Id: 0x4 00:41:34.660 Phase Bit: 0 00:41:34.660 Status Code: 0x2 00:41:34.660 Status Code Type: 0x0 00:41:34.660 Do Not Retry: 1 00:41:34.660 Error Location: 0x28 00:41:34.660 LBA: 0x0 00:41:34.660 Namespace: 0x0 00:41:34.660 Vendor Log Page: 0x0 00:41:34.660 00:41:34.660 Number of Queues 00:41:34.660 ================ 00:41:34.660 Number of I/O Submission Queues: 128 00:41:34.660 Number of I/O Completion Queues: 128 00:41:34.660 00:41:34.660 ZNS Specific Controller Data 00:41:34.660 ============================ 00:41:34.660 Zone Append Size Limit: 0 00:41:34.660 00:41:34.660 00:41:34.660 Active Namespaces 00:41:34.660 ================= 00:41:34.660 get_feature(0x05) failed 00:41:34.660 Namespace ID:1 00:41:34.660 Command Set Identifier: NVM (00h) 00:41:34.660 Deallocate: Supported 00:41:34.660 Deallocated/Unwritten Error: Not Supported 00:41:34.660 Deallocated Read Value: Unknown 00:41:34.660 Deallocate in Write Zeroes: Not Supported 00:41:34.660 Deallocated Guard Field: 0xFFFF 00:41:34.660 Flush: Supported 00:41:34.660 Reservation: Not Supported 00:41:34.660 Namespace Sharing Capabilities: Multiple Controllers 00:41:34.660 Size (in LBAs): 1953525168 (931GiB) 00:41:34.660 Capacity (in LBAs): 1953525168 (931GiB) 00:41:34.660 Utilization (in LBAs): 1953525168 (931GiB) 00:41:34.660 UUID: 4d465de5-63cd-494c-b499-0c62b1d44891 00:41:34.660 Thin Provisioning: Not Supported 00:41:34.660 Per-NS Atomic Units: Yes 00:41:34.660 Atomic Boundary Size (Normal): 0 00:41:34.660 Atomic Boundary Size (PFail): 0 00:41:34.660 Atomic Boundary Offset: 0 00:41:34.660 NGUID/EUI64 Never Reused: No 00:41:34.660 ANA group ID: 1 00:41:34.660 Namespace Write Protected: No 00:41:34.660 Number of LBA Formats: 1 00:41:34.660 Current LBA Format: LBA Format #00 00:41:34.660 LBA Format #00: Data Size: 512 Metadata Size: 0 00:41:34.660 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:34.660 rmmod nvme_tcp 00:41:34.660 rmmod nvme_fabrics 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:41:34.660 23:22:10 
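Both identify passes above are spdk_nvme_identify acting as an ordinary NVMe/TCP host against the kernel target. The first pass interrogates the discovery controller (no namespaces, a single small queue pair); the second connects to nqn.2016-06.io.spdk:testnqn and reports the exported namespace: 1953525168 LBAs of 512 bytes (931 GiB), one ANA group, 128 I/O queues and a 128-entry error log. The get_feature(0x01/0x02/0x04/0x05) failures, and the Invalid Field entries (status 0x2, type 0x0) in the error log, most likely just reflect Get Features selections the Linux target does not implement. The target is addressed through the -r transport ID string, whose syntax is visible in the commands above; run from an SPDK checkout it looks like:

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'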
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.660 23:22:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:37.201 23:22:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:39.111 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:39.111 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:39.111 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:41:39.111 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:39.680 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:41:39.941 00:41:39.941 real 0m12.485s 00:41:39.941 user 0m2.782s 00:41:39.941 sys 0m5.587s 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:41:39.941 ************************************ 00:41:39.941 END TEST nvmf_identify_kernel_target 00:41:39.941 ************************************ 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:39.941 23:22:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.202 ************************************ 00:41:40.202 START TEST nvmf_auth_host 00:41:40.202 ************************************ 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:40.202 * Looking for test storage... 00:41:40.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 
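The host identity that the auth test will present is generated up front rather than hard-coded: nvme gen-hostnqn returns a UUID-based NQN, the same UUID is reused as the host ID, and both are carried in the NVME_HOST array as --hostnqn/--hostid for every later nvme command. For example:

  nvme gen-hostnqn
  # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02   (value from this run)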
00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:41:40.202 23:22:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:43.522 23:22:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:43.522 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
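prepare_net_devs only accepts NICs from a known-good list: the e810, x722 and mlx arrays built above are PCI device-ID whitelists (E810 0x1592/0x159b, X722 0x37d2, plus a set of Mellanox IDs), and the scan then matches them against the PCI bus. On this host the two E810 ports at 0000:84:00.0 and 0000:84:00.1 qualify, and their kernel interface names are resolved through the same sysfs path the trace uses:

  # Resolve the netdev behind each accepted PCI function
  for pci in 0000:84:00.0 0000:84:00.1; do
      for netdev in /sys/bus/pci/devices/$pci/net/*; do
          echo "$pci -> ${netdev##*/}"    # prints cvl_0_0 and cvl_0_1 on this host
      done
  done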
00:41:43.522 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:43.523 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:43.523 Found net devices under 0000:84:00.0: cvl_0_0 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 
00:41:43.523 Found net devices under 0000:84:00.1: cvl_0_1 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:43.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:43.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:41:43.523 00:41:43.523 --- 10.0.0.2 ping statistics --- 00:41:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.523 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:43.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:43.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:41:43.523 00:41:43.523 --- 10.0.0.1 ping statistics --- 00:41:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.523 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1046385 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1046385 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1046385 ']' 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
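nvmf_tcp_init rebuilds the same two-namespace topology as the previous test: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and both directions are ping-verified before anything NVMe-related starts. nvmfappstart then launches nvmf_tgt (pid 1046385) inside the namespace with -L nvme_auth so the in-band authentication state machine is logged, and waitforlisten blocks until the target is up on the /var/tmp/spdk.sock RPC socket. Its internals are not shown in the trace; a minimal stand-in under those assumptions would be:

  # Hypothetical replacement for waitforlisten: poll until the RPC socket exists,
  # while making sure the target process (pid taken from the trace) is still alive.
  pid=1046385
  until [[ -S /var/tmp/spdk.sock ]]; do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done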
00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:43.523 23:22:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73c2a81c3e8c514239c0fb2c4d4f9d5c 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hrt 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73c2a81c3e8c514239c0fb2c4d4f9d5c 0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73c2a81c3e8c514239c0fb2c4d4f9d5c 0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73c2a81c3e8c514239c0fb2c4d4f9d5c 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hrt 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hrt 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Hrt 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.092 23:22:20 
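Each gen_dhchap_key call above builds one in-band authentication secret: xxd pulls the requested number of random bytes from /dev/urandom, the inline python step (format_dhchap_key) frames them as a 'DHHC-1:<hash>:<payload>:' string, and the result lands in a mode-0600 temp file. The first argument selects the hash tag embedded in the key (null/sha256/sha384/sha512 map to 0-3, matching the digests table above); keys[] holds the host secrets and ckeys[] the controller secrets used for bidirectional authentication. Reduced to the commands visible in the trace, keys[0] is produced roughly as follows; the real payload encoding lives in the python snippet that xtrace does not show, so the framing line here is only a placeholder:

  key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  # format_dhchap_key frames $key as 'DHHC-1:00:<encoded payload>:' (00 = null digest);
  # the echo below stands in for that framing and is not the real encoding.
  echo "DHHC-1:00:${key}:" > "$file"
  chmod 0600 "$file"
  keys[0]=$file

Newer nvme-cli builds also ship an nvme gen-dhchap-key helper that emits secrets in this same DHHC-1 format, which can be handy when reproducing these keys outside the test harness.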
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78c7b6e7096dd93a803cb982f8f800fb4e26d0790632871eface4895f9c02d7f 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uzJ 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78c7b6e7096dd93a803cb982f8f800fb4e26d0790632871eface4895f9c02d7f 3 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78c7b6e7096dd93a803cb982f8f800fb4e26d0790632871eface4895f9c02d7f 3 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78c7b6e7096dd93a803cb982f8f800fb4e26d0790632871eface4895f9c02d7f 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uzJ 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uzJ 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uzJ 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1c2e59fcc513199ca28e145d0df0ddfb9fe781e1e3f061e2 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7rM 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1c2e59fcc513199ca28e145d0df0ddfb9fe781e1e3f061e2 0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1c2e59fcc513199ca28e145d0df0ddfb9fe781e1e3f061e2 0 
00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1c2e59fcc513199ca28e145d0df0ddfb9fe781e1e3f061e2 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7rM 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7rM 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7rM 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7939c869768aaf3ca09ade93577994f5dedfb835a8f34498 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ho0 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7939c869768aaf3ca09ade93577994f5dedfb835a8f34498 2 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7939c869768aaf3ca09ade93577994f5dedfb835a8f34498 2 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7939c869768aaf3ca09ade93577994f5dedfb835a8f34498 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:41:44.092 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ho0 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ho0 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Ho0 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.351 23:22:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b7f1a613eb9dfed2001913a256b0b86f 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7o8 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b7f1a613eb9dfed2001913a256b0b86f 1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b7f1a613eb9dfed2001913a256b0b86f 1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b7f1a613eb9dfed2001913a256b0b86f 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7o8 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7o8 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7o8 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b800e7a2eed3915f88a6539faed3d597 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gvF 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b800e7a2eed3915f88a6539faed3d597 1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b800e7a2eed3915f88a6539faed3d597 1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=b800e7a2eed3915f88a6539faed3d597 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gvF 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gvF 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gvF 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=adfc0e9a1c63d1683584f4bafc59d960a474bd16abb25719 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5oi 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key adfc0e9a1c63d1683584f4bafc59d960a474bd16abb25719 2 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 adfc0e9a1c63d1683584f4bafc59d960a474bd16abb25719 2 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=adfc0e9a1c63d1683584f4bafc59d960a474bd16abb25719 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5oi 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5oi 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5oi 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:44.351 23:22:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b50eecd4412c81714d3809a3b0f0f0bf 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.G2Z 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b50eecd4412c81714d3809a3b0f0f0bf 0 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b50eecd4412c81714d3809a3b0f0f0bf 0 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b50eecd4412c81714d3809a3b0f0f0bf 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:44.351 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.G2Z 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.G2Z 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.G2Z 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bc5e6a5330cab625079bfc58d2d4bf3e5f400c8157fa2ca28508d71e43a7d926 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bJm 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bc5e6a5330cab625079bfc58d2d4bf3e5f400c8157fa2ca28508d71e43a7d926 3 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bc5e6a5330cab625079bfc58d2d4bf3e5f400c8157fa2ca28508d71e43a7d926 3 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bc5e6a5330cab625079bfc58d2d4bf3e5f400c8157fa2ca28508d71e43a7d926 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bJm 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bJm 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bJm 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1046385 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1046385 ']' 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:44.609 23:22:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hrt 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uzJ ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uzJ 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7rM 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Ho0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Ho0 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7o8 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gvF ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gvF 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5oi 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.G2Z ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.G2Z 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bJm 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:45.176 23:22:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:45.176 23:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:47.082 Waiting for block devices as requested 00:41:47.082 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:41:47.341 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:47.341 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:47.341 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:47.600 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:47.600 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:47.600 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:47.860 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:47.860 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:47.860 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:48.120 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:48.120 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:48.120 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:48.380 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:48.380 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:48.380 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:48.640 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:49.209 No valid GPT data, bailing 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:49.209 23:22:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:49.209 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:41:49.470 00:41:49.470 Discovery Log Number of Records 2, Generation counter 2 00:41:49.470 =====Discovery Log Entry 0====== 00:41:49.470 trtype: tcp 00:41:49.470 adrfam: ipv4 00:41:49.470 subtype: current discovery subsystem 00:41:49.470 treq: not specified, sq flow control disable supported 00:41:49.470 portid: 1 00:41:49.470 trsvcid: 4420 00:41:49.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:49.470 traddr: 10.0.0.1 00:41:49.470 eflags: none 00:41:49.470 sectype: none 00:41:49.470 =====Discovery Log Entry 1====== 00:41:49.470 trtype: tcp 00:41:49.470 adrfam: ipv4 00:41:49.470 subtype: nvme subsystem 00:41:49.470 treq: not specified, sq flow control disable supported 00:41:49.470 portid: 1 00:41:49.470 trsvcid: 4420 00:41:49.470 subnqn: nqn.2024-02.io.spdk:cnode0 00:41:49.470 traddr: 10.0.0.1 00:41:49.470 eflags: none 00:41:49.470 sectype: none 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.470 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.731 nvme0n1 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
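The nvmet_auth_set_key sha256 ffdhe2048 1 call above only xtraces the echoed values ('hmac(sha256)', ffdhe2048, and the two DHHC-1 secrets); the redirection targets are not captured. A plausible stand-alone equivalent is sketched below, assuming those echoes land in the standard Linux nvmet host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host entry created at auth.sh@36 above; treat the paths as an assumption rather than a transcript of the script.

# Hedged reconstruction of the kernel-target side of one nvmet_auth_set_key call.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
key1=$(cat /tmp/spdk.key-null.7rM)              # holds the DHHC-1:00:MWMyZTU5... secret seen above
ckey1=$(cat /tmp/spdk.key-sha384.Ho0)           # holds the DHHC-1:02:NzkzOWM4... controller secret
echo 'hmac(sha256)'  > "$host/dhchap_hash"      # traced: echo 'hmac(sha256)'
echo 'ffdhe2048'     > "$host/dhchap_dhgroup"   # traced: echo ffdhe2048
echo "$key1"         > "$host/dhchap_key"       # host secret for DH-HMAC-CHAP
echo "$ckey1"        > "$host/dhchap_ctrl_key"  # controller secret for bidirectional auth
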
00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.731 23:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.992 nvme0n1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:49.992 23:22:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.992 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.252 nvme0n1 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.252 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:50.253 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:50.253 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.253 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.513 nvme0n1 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.513 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.772 23:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.772 nvme0n1 00:41:50.772 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.773 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.773 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.773 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.773 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 
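On the initiator side the trace first registers every generated key file once with keyring_file_add_key (auth.sh@81/@82 above) and then, for each digest/dhgroup/key combination, runs the same connect_authenticate sequence: bdev_nvme_set_options to pin the allowed digests and DH groups, bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key, bdev_nvme_get_controllers to confirm nvme0 came up, and bdev_nvme_detach_controller. Condensed for the sha256/ffdhe2048 round with key0 (rpc_cmd is assumed to wrap scripts/rpc.py against the /var/tmp/spdk.sock socket named in the waitforlisten message above):

# One DH-HMAC-CHAP round, condensed from the rpc_cmd calls in this trace.
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"                 # assumed expansion of rpc_cmd
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.Hrt     # host secret (keys[0])
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uzJ   # controller secret (ckeys[0])
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers                             # the trace expects a controller named nvme0
$rpc bdev_nvme_detach_controller nvme0
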
00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.031 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.290 nvme0n1 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.290 23:22:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe3072 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.290 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.291 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.550 nvme0n1 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:51.550 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.551 
23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.551 23:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.811 nvme0n1 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.811 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:52.071 23:22:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.071 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.331 nvme0n1 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:52.331 23:22:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.331 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.589 nvme0n1 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
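Every connect_authenticate call traced in this run (host/auth.sh@55-65) follows the same pattern on the SPDK host side: restrict the allowed digest and DH group, attach with the DH-HMAC-CHAP key for the current keyid, confirm the controller actually appeared, then detach. A condensed sketch reconstructed from the trace, assuming rpc_cmd is the harness wrapper around scripts/rpc.py and that keys/ckeys were registered earlier as key0..key4 / ckey0..ckey3:

    # Condensed sketch of connect_authenticate as it appears in the xtrace output;
    # the real helper in host/auth.sh may differ in detail.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Only pass a controller key when a ckey<N> exists for this keyid (keyid 4 has none).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach only succeeds if DH-HMAC-CHAP completed, so checking the
        # controller name and detaching closes out one digest/dhgroup/keyid case.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }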
00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.589 23:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.846 nvme0n1 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.846 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.103 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.361 nvme0n1 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:53.361 23:22:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.361 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.362 23:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.931 nvme0n1 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
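On the target side, the nvmet_auth_set_key echoes just above ('hmac(sha256)', the DH group, and the DHHC-1 secrets) are written into the kernel nvmet allowed-host entry. The xtrace output does not show the redirections, so the configfs paths and hostnqn below are assumptions of this sketch:

    # Sketch of where the echoed values are assumed to land: the nvmet configfs
    # host entry used for DH-HMAC-CHAP. Paths and hostnqn are not shown in the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "$key"          > "$host/dhchap_key"      # DHHC-1:xx:... host secret
        # Controller key only when bidirectional authentication is being tested.
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
    }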
00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.931 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.502 nvme0n1 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
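The host/auth.sh@101 and @102 for-loop markers in this trace show the structure driving all of these repeated blocks: one pass per DH group, one pass per key index, programming the kernel target and then authenticating from the SPDK host. A sketch of that outer loop, with the dhgroups list limited to the groups visible in this excerpt and the keys/ckeys arrays assumed to be populated earlier in auth.sh:

    # Outer loop implied by the host/auth.sh@101-104 markers in the trace.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
        for keyid in "${!keys[@]}"; do             # host/auth.sh@102, keyids 0-4 here
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # program the target (auth.sh@103)
            connect_authenticate sha256 "$dhgroup" "$keyid"  # attach/verify/detach (auth.sh@104)
        done
    done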
00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:54.502 23:22:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:54.502 23:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.072 nvme0n1 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.072 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.642 nvme0n1 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.642 23:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.581 nvme0n1 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:56.581 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:56.840 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:56.841 23:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.779 nvme0n1 00:41:57.779 23:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.779 23:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:57.779 23:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.779 23:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.779 23:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:57.779 23:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:57.779 23:22:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:57.779 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.780 23:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.162 nvme0n1 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:41:59.162 
23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:59.162 23:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.129 nvme0n1 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.129 23:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.512 nvme0n1 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:01.512 23:22:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.512 23:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.422 nvme0n1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.422 23:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.333 nvme0n1 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:05.333 23:22:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.333 23:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.243 nvme0n1 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:07.243 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:07.503 23:22:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:07.503 23:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.412 nvme0n1 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:09.412 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:09.413 23:22:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:09.413 23:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.323 nvme0n1 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.323 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:11.324 
23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.324 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.584 nvme0n1 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.584 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.585 23:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.846 nvme0n1 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.846 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.108 nvme0n1 00:42:12.108 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.108 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.108 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.108 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.108 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.108 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:12.369 23:22:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.369 nvme0n1 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.369 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
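The trace above keeps repeating one and the same cycle for every digest/dhgroup/keyid combination: program the kernel nvmet target with the DH-HMAC-CHAP secrets for the host NQN, restrict the SPDK initiator to a single digest and DH group, attach a controller with the matching key material, check that nvme0 shows up in bdev_nvme_get_controllers, and detach again. The lines below are a minimal stand-alone sketch of that cycle, not host/auth.sh itself: the configfs attribute names, the direct rpc.py calls (rpc_cmd in the trace is the autotest wrapper around rpc.py), and the assumption that key0/ckey0 were registered in SPDK's keyring earlier in the test (e.g. with keyring_file_add_key) are inferred from the trace rather than copied from the script.

  #!/usr/bin/env bash
  # One authentication round, mirroring the sha384/ffdhe2048/keyid=0 pass in the trace.
  HOSTNQN=nqn.2024-02.io.spdk:host0
  SUBNQN=nqn.2024-02.io.spdk:cnode0
  TGT_IP=10.0.0.1     # NVMF_INITIATOR_IP, as resolved by get_main_ns_ip above
  DIGEST=sha384
  DHGROUP=ffdhe2048
  KEY='DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii:'    # host secret (keyid 0, from the trace)
  CKEY='DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=:'  # controller secret (ckey 0)

  # Target side (kernel nvmet): set hash, DH group and both secrets for this host.
  # Attribute names assume the in-kernel nvmet DH-HMAC-CHAP support; adjust to the running kernel.
  host_dir=/sys/kernel/config/nvmet/hosts/$HOSTNQN
  echo "hmac($DIGEST)" > "$host_dir/dhchap_hash"
  echo "$DHGROUP"      > "$host_dir/dhchap_dhgroup"
  echo "$KEY"          > "$host_dir/dhchap_key"
  echo "$CKEY"         > "$host_dir/dhchap_ctrl_key"

  # Initiator side (SPDK): allow only this digest/dhgroup pair, attach with the
  # matching keyring entries, verify the controller name, then tear it down.
  rpc.py bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$TGT_IP" -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc.py bdev_nvme_detach_controller nvme0

The passes for the last key (keyid 4 in this trace) have no controller secret, so they skip --dhchap-ctrlr-key entirely, which is why the empty [[ -z '' ]] check appears for that key. The 00/01/02/03 field inside the DHHC-1 strings records how the secret was transformed when it was generated (no transform, SHA-256, SHA-384 or SHA-512 respectively, e.g. by nvme-cli's gen-dhchap-key) and is independent of the hmac(sha384) digest being negotiated in this sweep.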
00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:12.628 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.629 nvme0n1 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.629 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:12.888 23:22:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.888 23:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.148 nvme0n1 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.148 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.149 23:22:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.149 23:22:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.149 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.410 nvme0n1 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.410 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.670 nvme0n1 00:42:13.670 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.670 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:13.670 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.670 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:13.670 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.670 23:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.931 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.191 nvme0n1 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:14.191 
23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:14.191 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.192 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.452 nvme0n1 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.452 
23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.452 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.713 23:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.283 nvme0n1 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:15.283 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:15.284 23:22:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.284 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.855 nvme0n1 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.855 23:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.424 nvme0n1 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.424 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.425 23:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.994 nvme0n1 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:16.994 23:22:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.994 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.564 nvme0n1 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.564 23:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.504 nvme0n1 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.504 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:18.767 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:18.768 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:18.768 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:18.768 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:18.768 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.768 23:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.708 nvme0n1 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:19.708 23:22:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:19.708 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.709 23:22:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.709 23:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.091 nvme0n1 00:42:21.091 23:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.091 23:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:21.091 23:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.091 23:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.091 23:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:21.091 23:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:21.091 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:21.092 23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.092 
23:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.030 nvme0n1 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.030 23:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.970 nvme0n1 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:42:22.970 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:22.970 23:22:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.971 23:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.880 nvme0n1 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:24.880 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.881 23:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.786 nvme0n1 00:42:26.786 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:26.786 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:26.786 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:26.786 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.786 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:26.786 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:27.046 
23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:27.046 23:23:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.965 nvme0n1 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.965 23:23:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.882 nvme0n1 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.882 23:23:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:30.882 23:23:07 
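The host/auth.sh@100-@102 markers above are the driver loops: every configured digest is paired with every DH group, and each of the five keys (keyid 0-4) is programmed into the target and exercised once per pair, which is why the same attach/detach pattern recurs throughout this log. In outline, using the function and array names exactly as they appear in the trace (only the sha384/sha512 digests and ffdhe groups seen in this run are assumed):

  for digest in "${digests[@]}"; do              # sha384, sha512, ... in this run
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do         # keyid 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
          done
      done
  done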
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:30.882 23:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.800 nvme0n1 00:42:32.800 23:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.800 23:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:32.800 23:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.800 23:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.800 23:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:32.800 23:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.800 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.801 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:42:33.062 nvme0n1 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.063 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.325 nvme0n1 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.325 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:42:33.587 
23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.587 nvme0n1 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.587 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:33.849 
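get_main_ns_ip (the nvmf/common.sh@741-755 block that precedes every attach) only decides which address the initiator should dial: an associative array maps each transport to the name of the environment variable holding the address, the tcp entry selects NVMF_INITIATOR_IP, and its value, 10.0.0.1 here, is echoed. A sketch of that helper follows; the TEST_TRANSPORT variable name and the use of indirect expansion are assumptions about details the trace does not show.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                  # "tcp" in this run
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                           # 10.0.0.1 in this trace
      echo "${!ip}"                                         # indirect expansion of the name
  }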
23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.849 23:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.111 nvme0n1 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.111 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.112 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.374 nvme0n1 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:34.374 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.375 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.636 nvme0n1 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.636 
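nvmet_auth_set_key (host/auth.sh@42-51) is the target-side half of each cycle: before the attach it hands the kernel nvmet target the negotiated hash, DH group, host key and, when one exists, the controller key; the echo 'hmac(sha512)', echo ffdhe3072 and echo DHHC-1:... lines above are those values being written for keyid 0. The trace shows only the echoed values, not their destination; the configfs paths and attribute names below are assumptions about the usual kernel nvmet layout, with the key material copied from this run.

  # Assumed destination of the echoed values; only the values themselves appear in the trace.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"
  echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii:' > "$host_dir/dhchap_key"
  # Written only when a bidirectional (controller) key exists for this keyid:
  echo 'DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=:' > "$host_dir/dhchap_ctrl_key"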
23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:34.636 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:34.898 23:23:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.898 23:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.160 nvme0n1 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:35.160 23:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.160 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.422 nvme0n1 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.422 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.423 23:23:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.423 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.684 23:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.946 nvme0n1 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:35.947 
23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.947 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
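
For readers following the trace, each digest/dhgroup/keyid pass above boils down to the same handful of SPDK RPC calls on the host side. The sketch below reconstructs that sequence from the commands visible in the xtrace; the direct scripts/rpc.py invocation, the placeholder values, and the assumption that keys named keyN/ckeyN were registered earlier in the test are illustrative only, not the literal auth.sh source (which drives this through the suite's rpc_cmd helper).

rpc=scripts/rpc.py            # assumed path to SPDK's RPC client
digest=sha512
dhgroup=ffdhe3072
keyid=2

# Restrict the initiator to the digest/DH-group pair under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the keyid'th host secret (plus the controller secret, when one exists).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller shows up, then detach for the next pass.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0
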
00:42:36.208 nvme0n1 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:36.208 23:23:12 
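
The host/auth.sh@101 and @102 for-loops visible above are what make this part of the trace so repetitive: the outer loop walks the DH-group list and the inner loop walks the key indices, and every (digest, dhgroup, keyid) tuple gets one nvmet_auth_set_key plus connect_authenticate pass like the one sketched earlier. A minimal stand-in for that driver loop, with array contents inferred from this run rather than copied from auth.sh:

digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups exercised in this section of the log
keys=(key0 key1 key2 key3 key4)                      # stand-ins for the DHHC-1 secrets set up earlier
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # one target-side key setup plus one authenticated connect per tuple
        echo "pass: digest=$digest dhgroup=$dhgroup keyid=$keyid"
    done
done
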
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.208 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.781 nvme0n1 00:42:36.781 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.781 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:36.782 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:36.782 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.782 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.782 23:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:36.782 23:23:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:36.782 23:23:13 
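
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced at host/auth.sh@58 is what makes bidirectional authentication optional per key: when ckeys[keyid] is empty, as it is for keyid 4 in this run, the array expands to nothing and the attach call carries no --dhchap-ctrlr-key. A standalone illustration of the idiom, with made-up secret values:

ckeys=( [2]="DHHC-1:01:placeholder-ctrlr-secret:" [4]="" )   # sparse indexed array, values invented

for keyid in 2 4; do
    # Expands to the two extra words only when a controller secret exists for this keyid.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
done
# keyid=2 -> --dhchap-ctrlr-key ckey2
# keyid=4 -> <no controller key>
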
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.782 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.353 nvme0n1 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:37.353 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.354 23:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.937 nvme0n1 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:37.937 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.938 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:38.515 nvme0n1 00:42:38.515 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:38.515 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:38.516 23:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:39.088 nvme0n1 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.088 23:23:15 
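
On the target side, the echo 'hmac(sha512)', echo ffdhe3072/ffdhe4096/..., and echo DHHC-1:... lines at host/auth.sh@48-@51 are nvmet_auth_set_key publishing the hash, DH group, host secret, and (when present) controller secret for the host entry being tested. The sketch below shows what that plausibly corresponds to on a Linux kernel nvmet target; the configfs path and attribute names are assumptions about the target being configured, not something stated in this log.

hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed kernel-nvmet configfs location

echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"      # digest under test
echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:01:placeholder-host-secret:'  > "$host_cfg/dhchap_key"       # host secret
echo 'DHHC-1:01:placeholder-ctrlr-secret:' > "$host_cfg/dhchap_ctrl_key"  # controller secret (bidirectional only)
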
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:39.088 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.089 23:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:40.034 nvme0n1 00:42:40.034 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.034 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:40.034 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:40.034 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.034 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:40.034 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:40.295 23:23:16 
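
The recurring [[ nvme0 == \n\v\m\e\0 ]] lines are not comparing against backslash escapes: when the right-hand side of == inside [[ ]] is quoted in the script, bash's xtrace prints it with every character escaped to show it is matched as a literal string rather than as a glob pattern. A quick way to reproduce that rendering:

# The trace line for this comparison comes out as:  + [[ nvme0 == \n\v\m\e\0 ]]
set -x
name=nvme0      # stand-in for $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]] && echo "controller attached, authentication succeeded"
set +x
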
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.295 23:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:41.239 nvme0n1 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:41.239 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.240 23:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:42.626 nvme0n1 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:42.626 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:42.627 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:42.627 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.627 23:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:43.569 nvme0n1 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:43.569 23:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:43.569 23:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:44.512 nvme0n1 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
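Note: the nvmet_auth_set_key calls traced above provision DH-HMAC-CHAP material on the kernel target side; the trace records only the echoed values ('hmac(sha512)', the dhgroup, and the DHHC-1 secrets), not the files they are written to. A minimal sketch of such a step is below; the configfs attribute names are assumptions based on the usual nvmet host directory layout, not something shown in this log.

# Sketch only: provision DH-HMAC-CHAP secrets for one host on the kernel nvmet target.
# Attribute file names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are
# assumed; the trace above shows only the values being echoed.
hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)'   > "$hostdir/dhchap_hash"       # digest for this pass
echo 'ffdhe6144'      > "$hostdir/dhchap_dhgroup"    # DH group for this pass
echo 'DHHC-1:02:...'  > "$hostdir/dhchap_key"        # host secret (placeholder value)
echo 'DHHC-1:00:...'  > "$hostdir/dhchap_ctrl_key"   # controller secret, when bidirectional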
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNjMmE4MWMzZThjNTE0MjM5YzBmYjJjNGQ0ZjlkNWOAmeii: 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzhjN2I2ZTcwOTZkZDkzYTgwM2NiOTgyZjhmODAwZmI0ZTI2ZDA3OTA2MzI4NzFlZmFjZTQ4OTVmOWMwMmQ3ZjxL5YI=: 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.512 23:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:46.429 nvme0n1 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:46.429 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:46.430 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:46.430 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:46.430 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.430 23:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:48.340 nvme0n1 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:48.341 23:23:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMWE2MTNlYjlkZmVkMjAwMTkxM2EyNTZiMGI4NmbecGzT: 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgwMGU3YTJlZWQzOTE1Zjg4YTY1MzlmYWVkM2Q1OTfeKWke: 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.341 23:23:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.341 23:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:50.250 nvme0n1 00:42:50.250 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:50.250 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:50.250 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:50.250 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:50.250 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:50.250 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:50.510 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:50.510 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:50.510 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:50.510 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWRmYzBlOWExYzYzZDE2ODM1ODRmNGJhZmM1OWQ5NjBhNDc0YmQxNmFiYjI1NzE5qSsr6A==: 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: ]] 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjUwZWVjZDQ0MTJjODE3MTRkMzgwOWEzYjBmMGYwYmZo/Nt/: 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:50.511 23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:50.511 
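Note: on the initiator side, rpc_cmd wraps SPDK's scripts/rpc.py, so each connect_authenticate pass reduces to the two RPCs visible in the trace: select the digest/dhgroup, then attach with the per-keyid secrets. A hedged sketch of one such pass, assuming the rpc.py path and that key3/ckey3 name secrets registered earlier in the test:

# Sketch of one connect_authenticate iteration (digest, dhgroup and keyid vary per pass).
# RPC names and flags are taken from the rpc_cmd lines above; the rpc.py path is assumed.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
$RPC bdev_nvme_get_controllers          # expect one controller named nvme0
$RPC bdev_nvme_detach_controller nvme0  # tear down before the next pass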
23:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:52.421 nvme0n1 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmM1ZTZhNTMzMGNhYjYyNTA3OWJmYzU4ZDJkNGJmM2U1ZjQwMGM4MTU3ZmEyY2EyODUwOGQ3MWU0M2E3ZDkyNj9R8ng=: 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.421 23:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.332 nvme0n1 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWMyZTU5ZmNjNTEzMTk5Y2EyOGUxNDVkMGRmMGRkZmI5ZmU3ODFlMWUzZjA2MWUyn2xWCw==: 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzkzOWM4Njk3NjhhYWYzY2EwOWFkZTkzNTc3OTk0ZjVkZWRmYjgzNWE4ZjM0NDk4mc+z+A==: 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.332 request: 00:42:54.332 { 00:42:54.332 "name": "nvme0", 00:42:54.332 "trtype": "tcp", 00:42:54.332 "traddr": "10.0.0.1", 00:42:54.332 "adrfam": "ipv4", 00:42:54.332 "trsvcid": "4420", 00:42:54.332 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:42:54.332 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:42:54.332 "prchk_reftag": false, 00:42:54.332 "prchk_guard": false, 00:42:54.332 "hdgst": false, 00:42:54.332 "ddgst": false, 00:42:54.332 "method": "bdev_nvme_attach_controller", 00:42:54.332 "req_id": 1 00:42:54.332 } 00:42:54.332 Got JSON-RPC error response 00:42:54.332 response: 00:42:54.332 { 00:42:54.332 "code": -5, 00:42:54.332 "message": "Input/output error" 00:42:54.332 } 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:54.332 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:54.333 23:23:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.333 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.593 request: 00:42:54.593 { 00:42:54.593 "name": "nvme0", 00:42:54.593 "trtype": "tcp", 00:42:54.593 "traddr": "10.0.0.1", 00:42:54.593 "adrfam": "ipv4", 00:42:54.593 "trsvcid": "4420", 00:42:54.593 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:42:54.593 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:42:54.593 "prchk_reftag": false, 00:42:54.593 "prchk_guard": false, 00:42:54.593 "hdgst": false, 00:42:54.593 "ddgst": false, 00:42:54.593 "dhchap_key": "key2", 00:42:54.593 "method": "bdev_nvme_attach_controller", 00:42:54.593 "req_id": 1 00:42:54.593 } 00:42:54.593 Got JSON-RPC error response 00:42:54.593 response: 00:42:54.593 { 00:42:54.593 "code": -5, 00:42:54.593 "message": "Input/output error" 00:42:54.593 } 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.593 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:54.856 request: 00:42:54.856 { 00:42:54.856 "name": "nvme0", 00:42:54.856 "trtype": "tcp", 00:42:54.856 "traddr": "10.0.0.1", 00:42:54.856 "adrfam": "ipv4", 00:42:54.856 "trsvcid": "4420", 00:42:54.856 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:42:54.856 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:42:54.856 "prchk_reftag": false, 00:42:54.856 "prchk_guard": false, 00:42:54.856 "hdgst": false, 00:42:54.856 "ddgst": false, 00:42:54.856 "dhchap_key": "key1", 00:42:54.856 "dhchap_ctrlr_key": "ckey2", 00:42:54.856 "method": "bdev_nvme_attach_controller", 00:42:54.856 "req_id": 1 00:42:54.856 } 00:42:54.856 Got JSON-RPC error response 00:42:54.856 response: 00:42:54.856 { 00:42:54.856 "code": -5, 00:42:54.856 "message": "Input/output error" 00:42:54.856 } 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
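Note: the request/response pairs above are the negative half of the test: attaching with no key, with key2 alone, or with the mismatched key1/ckey2 pair is expected to fail, and the NOT wrapper turns the JSON-RPC -5 "Input/output error" into a pass. A minimal sketch of the same check, reusing the $RPC wrapper assumed in the previous sketch:

# Sketch: authentication must fail when the host presents missing or mismatched DH-CHAP keys.
# The -5 / "Input/output error" response recorded above is the expected outcome here.
if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
    echo "unexpected: attach succeeded with mismatched keys" >&2
    exit 1
fi
# No controller should be left behind after a rejected attach.
[[ $($RPC bdev_nvme_get_controllers | jq length) -eq 0 ]]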
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:54.856 rmmod nvme_tcp 00:42:54.856 rmmod nvme_fabrics 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1046385 ']' 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1046385 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1046385 ']' 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1046385 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:54.856 23:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1046385 00:42:54.856 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:54.856 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:54.856 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1046385' 00:42:54.856 killing process with pid 1046385 00:42:54.856 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1046385 00:42:54.856 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1046385 00:42:55.115 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:55.116 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:55.116 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:55.116 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:55.116 23:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:55.116 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:55.116 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:55.116 23:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:42:57.654 23:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:59.087 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:59.087 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:59.347 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:00.285 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:43:00.285 23:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Hrt /tmp/spdk.key-null.7rM /tmp/spdk.key-sha256.7o8 /tmp/spdk.key-sha384.5oi /tmp/spdk.key-sha512.bJm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:43:00.285 23:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
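Note: the cleanup traced above reverses the earlier setup: the host is de-authorized from the subsystem, the kernel target configfs tree is torn down, the nvmet modules are unloaded, and the temporary DHHC-1 key files are removed. A condensed sketch of those steps, with paths copied from the trace and the destination of the bare 'echo 0' assumed:

# Sketch of the kernel-target teardown performed by cleanup/clean_kernel_target above.
cfg=/sys/kernel/config/nvmet
rm "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"  # assumed target of 'echo 0'
rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0"
modprobe -r nvmet_tcp nvmet
rm -f /tmp/spdk.key-*   # generated DHHC-1 secrets (spdk.key-null.*, -sha256.*, -sha384.*, -sha512.*)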
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:02.195 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:02.195 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:02.195 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:02.195 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:02.195 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:02.195 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:02.195 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:02.195 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:02.195 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:02.195 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:02.195 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:02.195 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:02.195 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:02.195 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:02.195 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:02.195 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:02.195 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:02.195 00:43:02.195 real 1m22.225s 00:43:02.195 user 1m20.842s 00:43:02.195 sys 0m9.607s 00:43:02.195 23:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:02.195 23:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:43:02.195 ************************************ 00:43:02.195 END TEST nvmf_auth_host 00:43:02.195 ************************************ 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:02.455 ************************************ 00:43:02.455 START TEST nvmf_digest 00:43:02.455 ************************************ 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:43:02.455 * Looking for test storage... 
00:43:02.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:02.455 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:02.456 
23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:43:02.456 23:23:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:05.763 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:05.763 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:05.763 
23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:05.763 Found net devices under 0000:84:00.0: cvl_0_0 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:05.763 Found net devices under 0000:84:00.1: cvl_0_1 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:05.763 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:05.764 23:23:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:05.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:05.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:43:05.764 00:43:05.764 --- 10.0.0.2 ping statistics --- 00:43:05.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.764 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:05.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:05.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:43:05.764 00:43:05.764 --- 10.0.0.1 ping statistics --- 00:43:05.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.764 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:43:05.764 ************************************ 00:43:05.764 START TEST nvmf_digest_clean 00:43:05.764 ************************************ 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1059028 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1059028 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1059028 ']' 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:05.764 23:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:05.764 [2024-07-22 23:23:41.991166] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:43:05.764 [2024-07-22 23:23:41.991275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.764 EAL: No free 2048 kB hugepages reported on node 1 00:43:06.023 [2024-07-22 23:23:42.095472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:06.023 [2024-07-22 23:23:42.247926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:06.023 [2024-07-22 23:23:42.248033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:06.023 [2024-07-22 23:23:42.248070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:06.023 [2024-07-22 23:23:42.248100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:06.023 [2024-07-22 23:23:42.248126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
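The nvmftestinit/nvmf_tcp_init steps traced above amount to moving one port of the discovered NIC pair into a private network namespace, addressing both ends, opening the NVMe/TCP port, and then launching the target inside that namespace. A condensed sketch of that sequence, using the interface names, addresses and paths from this particular run (they will differ on other test beds):

  # target-side port goes into its own namespace; initiator-side port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

  # start the SPDK target inside the namespace, paused until RPC configuration arrives
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &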
00:43:06.023 [2024-07-22 23:23:42.248201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:06.986 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:06.986 null0 00:43:06.986 [2024-07-22 23:23:43.283110] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:07.243 [2024-07-22 23:23:43.307428] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:43:07.243 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1059181 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1059181 /var/tmp/bperf.sock 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1059181 ']' 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:07.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:07.244 23:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:07.244 [2024-07-22 23:23:43.397677] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:43:07.244 [2024-07-22 23:23:43.397831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059181 ] 00:43:07.244 EAL: No free 2048 kB hugepages reported on node 1 00:43:07.244 [2024-07-22 23:23:43.503581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.502 [2024-07-22 23:23:43.609177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:08.439 23:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:08.439 23:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:43:08.439 23:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:43:08.439 23:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:43:08.439 23:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:09.380 23:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:09.380 23:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:09.639 nvme0n1 00:43:09.639 23:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:43:09.639 23:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:09.898 Running I/O for 2 seconds... 
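Each run_bperf pass above follows the same pattern on the initiator side: start bdevperf against a private RPC socket, complete its framework init, attach the target subsystem over TCP with data digest enabled, and drive the workload through the bdevperf RPC helper. A condensed sketch of the first pass (randread, 4 KiB, queue depth 128), with the socket path, address and NQN taken from this run and paths shortened to the spdk tree:

  # load generator, held at --wait-for-rpc so digest settings can be applied before I/O starts
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # --ddgst enables the NVMe/TCP data digest, so every transfer is CRC32C-protected
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the configured 2-second workload against the new nvme0n1 bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests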
00:43:12.438 00:43:12.438 Latency(us) 00:43:12.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:12.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:12.438 nvme0n1 : 2.00 14489.77 56.60 0.00 0.00 8821.54 4660.34 19903.53 00:43:12.438 =================================================================================================================== 00:43:12.438 Total : 14489.77 56.60 0.00 0.00 8821.54 4660.34 19903.53 00:43:12.438 0 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:12.438 | select(.opcode=="crc32c") 00:43:12.438 | "\(.module_name) \(.executed)"' 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1059181 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1059181 ']' 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1059181 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1059181 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1059181' 00:43:12.438 killing process with pid 1059181 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1059181 00:43:12.438 Received shutdown signal, test time was about 2.000000 seconds 00:43:12.438 00:43:12.438 Latency(us) 00:43:12.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:12.438 =================================================================================================================== 00:43:12.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1059181 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1059843 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1059843 /var/tmp/bperf.sock 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1059843 ']' 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:12.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:12.438 23:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:12.698 [2024-07-22 23:23:48.773851] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:43:12.698 [2024-07-22 23:23:48.773944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059843 ] 00:43:12.698 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:12.698 Zero copy mechanism will not be used. 
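After each of these passes the script reads the accel framework's statistics back over the same bperf socket and asserts that the CRC32C work was both executed and handled by the expected module (software in these runs, since dsa_initiator is false). The check seen in the trace reduces to a one-liner:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # passes when this prints "software <count>" with a non-zero executed count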
00:43:12.698 EAL: No free 2048 kB hugepages reported on node 1 00:43:12.698 [2024-07-22 23:23:48.843958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:12.698 [2024-07-22 23:23:48.951680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:12.958 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:12.958 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:43:12.958 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:43:12.958 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:43:12.958 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:13.896 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:13.896 23:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:14.154 nvme0n1 00:43:14.154 23:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:43:14.154 23:23:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:14.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:14.412 Zero copy mechanism will not be used. 00:43:14.412 Running I/O for 2 seconds... 
00:43:16.317 00:43:16.317 Latency(us) 00:43:16.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:16.317 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:43:16.317 nvme0n1 : 2.00 3959.64 494.95 0.00 0.00 4035.19 1104.40 5631.24 00:43:16.317 =================================================================================================================== 00:43:16.317 Total : 3959.64 494.95 0.00 0.00 4035.19 1104.40 5631.24 00:43:16.317 0 00:43:16.317 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:43:16.317 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:43:16.317 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:16.317 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:16.317 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:16.317 | select(.opcode=="crc32c") 00:43:16.317 | "\(.module_name) \(.executed)"' 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1059843 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1059843 ']' 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1059843 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1059843 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:16.575 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1059843' 00:43:16.575 killing process with pid 1059843 00:43:16.576 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1059843 00:43:16.576 Received shutdown signal, test time was about 2.000000 seconds 00:43:16.576 00:43:16.576 Latency(us) 00:43:16.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:16.576 =================================================================================================================== 00:43:16.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:16.576 23:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1059843 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1060258 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1060258 /var/tmp/bperf.sock 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1060258 ']' 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:16.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:16.834 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:16.834 [2024-07-22 23:23:53.131414] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:43:16.834 [2024-07-22 23:23:53.131513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060258 ] 00:43:17.094 EAL: No free 2048 kB hugepages reported on node 1 00:43:17.094 [2024-07-22 23:23:53.207669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:17.094 [2024-07-22 23:23:53.314719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.354 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:17.354 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:43:17.354 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:43:17.354 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:43:17.354 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:17.627 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:17.627 23:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:18.575 nvme0n1 00:43:18.575 23:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:43:18.575 23:23:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:18.575 Running I/O for 2 seconds... 
00:43:20.479 00:43:20.479 Latency(us) 00:43:20.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.479 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:20.479 nvme0n1 : 2.01 16419.32 64.14 0.00 0.00 7780.97 4102.07 16796.63 00:43:20.479 =================================================================================================================== 00:43:20.479 Total : 16419.32 64.14 0.00 0.00 7780.97 4102.07 16796.63 00:43:20.479 0 00:43:20.479 23:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:43:20.479 23:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:43:20.737 23:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:20.737 23:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:20.737 | select(.opcode=="crc32c") 00:43:20.737 | "\(.module_name) \(.executed)"' 00:43:20.737 23:23:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:20.997 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1060258 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1060258 ']' 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1060258 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060258 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060258' 00:43:20.998 killing process with pid 1060258 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1060258 00:43:20.998 Received shutdown signal, test time was about 2.000000 seconds 00:43:20.998 00:43:20.998 Latency(us) 00:43:20.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.998 =================================================================================================================== 00:43:20.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:20.998 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1060258 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1060792 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1060792 /var/tmp/bperf.sock 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1060792 ']' 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:21.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:21.256 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:21.256 [2024-07-22 23:23:57.459635] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:43:21.257 [2024-07-22 23:23:57.459725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060792 ] 00:43:21.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:21.257 Zero copy mechanism will not be used. 
00:43:21.257 EAL: No free 2048 kB hugepages reported on node 1 00:43:21.257 [2024-07-22 23:23:57.529904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.514 [2024-07-22 23:23:57.634818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:21.514 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:21.514 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:43:21.514 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:43:21.514 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:43:21.514 23:23:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:21.772 23:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:21.772 23:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:22.342 nvme0n1 00:43:22.342 23:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:43:22.342 23:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:22.602 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:22.602 Zero copy mechanism will not be used. 00:43:22.602 Running I/O for 2 seconds... 
00:43:24.504 00:43:24.504 Latency(us) 00:43:24.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:24.504 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:43:24.504 nvme0n1 : 2.00 4000.30 500.04 0.00 0.00 3989.73 3252.53 11796.48 00:43:24.504 =================================================================================================================== 00:43:24.504 Total : 4000.30 500.04 0.00 0.00 3989.73 3252.53 11796.48 00:43:24.504 0 00:43:24.504 23:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:43:24.504 23:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:43:24.504 23:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:24.504 23:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:24.504 | select(.opcode=="crc32c") 00:43:24.504 | "\(.module_name) \(.executed)"' 00:43:24.504 23:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1060792 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1060792 ']' 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1060792 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:25.072 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060792 00:43:25.331 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:25.331 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:25.331 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060792' 00:43:25.331 killing process with pid 1060792 00:43:25.331 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1060792 00:43:25.331 Received shutdown signal, test time was about 2.000000 seconds 00:43:25.331 00:43:25.331 Latency(us) 00:43:25.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:25.331 =================================================================================================================== 00:43:25.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:25.331 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1060792 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1059028 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1059028 ']' 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1059028 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1059028 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1059028' 00:43:25.590 killing process with pid 1059028 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1059028 00:43:25.590 23:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1059028 00:43:25.847 00:43:25.847 real 0m20.114s 00:43:25.847 user 0m40.776s 00:43:25.847 sys 0m5.557s 00:43:25.847 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:25.847 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:25.847 ************************************ 00:43:25.847 END TEST nvmf_digest_clean 00:43:25.847 ************************************ 00:43:25.847 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:43:25.847 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:43:25.847 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:43:25.848 ************************************ 00:43:25.848 START TEST nvmf_digest_error 00:43:25.848 ************************************ 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1061442 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1061442 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1061442 ']' 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:25.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:25.848 23:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:26.105 [2024-07-22 23:24:02.202332] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:43:26.105 [2024-07-22 23:24:02.202504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:26.105 EAL: No free 2048 kB hugepages reported on node 1 00:43:26.105 [2024-07-22 23:24:02.352507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:26.364 [2024-07-22 23:24:02.498614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:26.364 [2024-07-22 23:24:02.498707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:26.364 [2024-07-22 23:24:02.498743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:26.364 [2024-07-22 23:24:02.498773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:26.364 [2024-07-22 23:24:02.498798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:26.364 [2024-07-22 23:24:02.498868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:27.303 [2024-07-22 23:24:03.426164] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:27.303 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:27.303 null0 00:43:27.303 [2024-07-22 23:24:03.598228] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:27.561 [2024-07-22 23:24:03.622504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1061621 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1061621 /var/tmp/bperf.sock 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1061621 ']' 
00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:27.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:27.561 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:27.561 [2024-07-22 23:24:03.676492] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:43:27.561 [2024-07-22 23:24:03.676582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061621 ] 00:43:27.561 EAL: No free 2048 kB hugepages reported on node 1 00:43:27.561 [2024-07-22 23:24:03.753062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:27.561 [2024-07-22 23:24:03.859026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:27.821 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:27.821 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:43:27.821 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:27.821 23:24:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:28.389 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:43:28.389 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:28.389 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:28.389 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:28.389 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:28.390 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:28.649 nvme0n1 00:43:28.649 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:43:28.649 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:28.649 23:24:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:28.649 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:28.649 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:28.649 23:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:28.909 Running I/O for 2 seconds... 00:43:28.909 [2024-07-22 23:24:05.207593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:28.909 [2024-07-22 23:24:05.207660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.909 [2024-07-22 23:24:05.207686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.224558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.224602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.242027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.242070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.242094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.257219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.257261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.257285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.276541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.276582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.276606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.292549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.292590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.292614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.308443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.308485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.308508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.324863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.324902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.324925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.341398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.341439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.341462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.360300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.360350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.360375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.379459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.379500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.379523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.394267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.394317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.168 [2024-07-22 23:24:05.394343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.168 [2024-07-22 23:24:05.415452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.168 [2024-07-22 23:24:05.415494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:29.168 [2024-07-22 23:24:05.415517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.169 [2024-07-22 23:24:05.437587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.169 [2024-07-22 23:24:05.437628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.169 [2024-07-22 23:24:05.437651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.169 [2024-07-22 23:24:05.457117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.169 [2024-07-22 23:24:05.457158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.169 [2024-07-22 23:24:05.457195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.169 [2024-07-22 23:24:05.471963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.169 [2024-07-22 23:24:05.472004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.169 [2024-07-22 23:24:05.472027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.489285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.489340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.489366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.510577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.510620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.510643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.529328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.529368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.529392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.544878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.544919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:4588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.544942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.561801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.561845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.561870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.578514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.578554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.578578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.598413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.429 [2024-07-22 23:24:05.598455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.429 [2024-07-22 23:24:05.598479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.429 [2024-07-22 23:24:05.613615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.613657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.613681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.632148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.632189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.632213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.652629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.652671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.652696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.668126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.668167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.668191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.689126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.689168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.689192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.708548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.708591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.708615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.723317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.723359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.723382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.430 [2024-07-22 23:24:05.739793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.430 [2024-07-22 23:24:05.739835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.430 [2024-07-22 23:24:05.739860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.758144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.758190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.758227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.775778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.775820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.775843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.790958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 
00:43:29.690 [2024-07-22 23:24:05.791000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.791023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.812640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.812682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.812706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.834488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.834531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.834555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.851550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.851592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.851615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.868834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.868875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.868899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.886086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.886128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.886151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.903307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.903356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.903379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.921155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.921203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.921227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.935138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.935179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.935203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.954359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.954401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.690 [2024-07-22 23:24:05.954424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.690 [2024-07-22 23:24:05.973705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.690 [2024-07-22 23:24:05.973747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.691 [2024-07-22 23:24:05.973770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.691 [2024-07-22 23:24:05.993338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.691 [2024-07-22 23:24:05.993383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.691 [2024-07-22 23:24:05.993407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.014331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.014375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.014399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.031212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.031254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.031278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.045474] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.045514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.045537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.065876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.065917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.065940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.085572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.085613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.085636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.099321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.099361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.099384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.119733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.119775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.119799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.141339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.141381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.141404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.161286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.161336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.161360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:43:29.948 [2024-07-22 23:24:06.176493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.176534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.176557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.195377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.195418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.195441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.212921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.212963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.212986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.231974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.232015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.232045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.948 [2024-07-22 23:24:06.249370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:29.948 [2024-07-22 23:24:06.249415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.948 [2024-07-22 23:24:06.249438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.266610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.266652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.266676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.281819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.281860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.281883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.299874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.299915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.299939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.314986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.315029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.315053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.330300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.330350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.330374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.348440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.348482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.348505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.364405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.364447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.364471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.380548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.380589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.380612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.396761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.396804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.396827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.412732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.412774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.412797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.427433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.427475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.427499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.444363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.444404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.444427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.460471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.460512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.460535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.476571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.476612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.476635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.492678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.492719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.206 [2024-07-22 23:24:06.492742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.206 [2024-07-22 23:24:06.508763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.206 [2024-07-22 23:24:06.508805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:30.206 [2024-07-22 23:24:06.508835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.526548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.526591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.526614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.542492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.542533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.542556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.558737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.558778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.558801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.574847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.574889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.574912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.590957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.590998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.591021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.607166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.607207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.607231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.623276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.623326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:15354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.623352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.639396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.639437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.639460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.655467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.655514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.655538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.671583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.671623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.671647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.687665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.687706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.687729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.703757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.703798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.703822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.719843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.719883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.735989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.736029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.736052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.752024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.752064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.752087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.465 [2024-07-22 23:24:06.768104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.465 [2024-07-22 23:24:06.768146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.465 [2024-07-22 23:24:06.768169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.784222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 [2024-07-22 23:24:06.784264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.723 [2024-07-22 23:24:06.784288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.800629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 [2024-07-22 23:24:06.800671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.723 [2024-07-22 23:24:06.800695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.816721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 [2024-07-22 23:24:06.816763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.723 [2024-07-22 23:24:06.816786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.832822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 [2024-07-22 23:24:06.832863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.723 [2024-07-22 23:24:06.832887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.848924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 
[2024-07-22 23:24:06.848965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.723 [2024-07-22 23:24:06.848988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.867985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 [2024-07-22 23:24:06.868027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.723 [2024-07-22 23:24:06.868050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.723 [2024-07-22 23:24:06.883257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.723 [2024-07-22 23:24:06.883299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.883333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.899275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.899325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.899351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.915416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.915456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.915480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.931540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.931582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.931614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.949763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.949803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.949826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.965040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.965081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.965104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.980672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.980713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.980736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:06.996529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:06.996570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:06.996594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:07.012642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:07.012682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:07.012706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.724 [2024-07-22 23:24:07.029514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.724 [2024-07-22 23:24:07.029555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.724 [2024-07-22 23:24:07.029579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.046241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.046285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.046317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.060526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.060568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.060592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.077777] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.077826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.077851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.093962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.094003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.094028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.110084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.110125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.110149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.126209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.126250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.126274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.143800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.143841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.143864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.160103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.160143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.160166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:30.982 [2024-07-22 23:24:07.176210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0) 00:43:30.982 [2024-07-22 23:24:07.176250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:30.982 [2024-07-22 23:24:07.176273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:43:30.982 [2024-07-22 23:24:07.192265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2358db0)
00:43:30.982 [2024-07-22 23:24:07.192306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:30.982 [2024-07-22 23:24:07.192344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:43:30.982
00:43:30.982 Latency(us)
00:43:30.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:30.982 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:43:30.982 nvme0n1 : 2.05 14624.80 57.13 0.00 0.00 8568.61 4466.16 50098.63
00:43:30.982 ===================================================================================================================
00:43:30.982 Total : 14624.80 57.13 0.00 0.00 8568.61 4466.16 50098.63
00:43:30.982 0
00:43:30.982 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:30.982 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:30.982 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:30.982 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:30.982 | .driver_specific
00:43:30.982 | .nvme_error
00:43:30.982 | .status_code
00:43:30.982 | .command_transient_transport_error'
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 ))
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1061621
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1061621 ']'
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1061621
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:31.241 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1061621
00:43:31.504 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:43:31.504 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:43:31.504 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1061621'
00:43:31.504 killing process with pid 1061621
00:43:31.504 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1061621
00:43:31.504 Received shutdown signal, test time was about 2.000000 seconds
00:43:31.504
00:43:31.504 Latency(us)
00:43:31.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:31.504 ===================================================================================================================
00:43:31.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:31.504 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1061621
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1062143
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1062143 /var/tmp/bperf.sock
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1062143 ']'
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:31.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:43:31.765 23:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:43:31.765 [2024-07-22 23:24:07.919888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization...
00:43:31.765 [2024-07-22 23:24:07.920073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062143 ]
00:43:31.765 I/O size of 131072 is greater than zero copy threshold (65536).
00:43:31.765 Zero copy mechanism will not be used.
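The trace above launches bdevperf with its JSON-RPC server on /var/tmp/bperf.sock and then calls waitforlisten before issuing any RPCs. A minimal sketch of that wait step, assuming a simple poll of the rpc_get_methods RPC; this is an illustration only, not the actual waitforlisten implementation from autotest_common.sh, and the retry budget and sleep interval are invented for the example:

# Sketch: poll the bdevperf RPC socket until it answers, then proceed with RPCs.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
for i in $(seq 1 100); do                       # retry budget is illustrative
    if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break                                   # socket is up and serving RPCs
    fi
    sleep 0.1
done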
00:43:31.765 EAL: No free 2048 kB hugepages reported on node 1
00:43:32.024 [2024-07-22 23:24:08.038392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:32.283 [2024-07-22 23:24:08.144420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:43:32.283 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:43:32.283 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:43:32.283 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:32.283 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:32.541 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:43:32.541 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:43:32.541 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:43:32.541 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:43:32.541 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:32.541 23:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:33.106 nvme0n1
00:43:33.106 23:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:43:33.106 23:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:43:33.106 23:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:43:33.106 23:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:43:33.106 23:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:43:33.106 23:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:43:33.106 I/O size of 131072 is greater than zero copy threshold (65536).
00:43:33.106 Zero copy mechanism will not be used.
00:43:33.106 Running I/O for 2 seconds...
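Condensed from the xtrace above, a sketch of the digest-error sequence this run performs: enable per-NVMe error statistics on the bdevperf side, clear any crc32c error injection, attach the controller with --ddgst so the host verifies TCP data digests, arm crc32c corruption in the target's accel layer, drive the workload, and finally read back the transient-transport-error count the way get_transient_errcount did earlier. The bperf_rpc calls go to /var/tmp/bperf.sock as shown in the trace; routing the accel_error_inject_error calls through the target's default RPC socket is an assumption here (the harness issues them via rpc_cmd), and the compact jq path is equivalent to the multi-line filter shown above.

# Sketch reconstructed from the trace; not the digest.sh source itself.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# bdevperf side: keep NVMe error statistics (retry count -1 as in the trace)
$RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side (assumed default RPC socket): start with crc32c injection disabled
$RPC accel_error_inject_error -o crc32c -t disable
# attach over TCP with data digest enabled; the resulting bdev appears as nvme0n1
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: arm crc32c corruption (as in the trace: -t corrupt -i 32) so the
# host observes data digest errors on reads
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the queued randread workload for the configured 2 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $BPERF_SOCK perform_tests
# count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1
$RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'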
00:43:33.106 [2024-07-22 23:24:09.337834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.337899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.337935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.345834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.345880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.345905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.354618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.354663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.354688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.363390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.363434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.363459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.373199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.373252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.373276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.380358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.380401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.380425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.388060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.388102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.388126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.395676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.395718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.395743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.402996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.403038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.403061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.106 [2024-07-22 23:24:09.410600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.106 [2024-07-22 23:24:09.410650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.106 [2024-07-22 23:24:09.410675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.418351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.418401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.418427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.426647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.426715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.434944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.434986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.435010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.442374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.442416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.442439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.449749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.449790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.449813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.457256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.457320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.457347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.464266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.464321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.464347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.471292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.471341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.471365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.479690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.479732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.479755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.488116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.488157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.367 [2024-07-22 23:24:09.488180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.367 [2024-07-22 23:24:09.496653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.367 [2024-07-22 23:24:09.496695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:33.368 [2024-07-22 23:24:09.496719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.505090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.505132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.505155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.514161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.514202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.514225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.522797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.522839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.522862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.531123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.531163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.531186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.539434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.539474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.539496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.548967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.549016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.549041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.556429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.556478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.556502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.564922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.564963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.564987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.573390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.573431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.573454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.581715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.581757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.581780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.589969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.590011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.590035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.598525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.598569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.598593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.607362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.607417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.607441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.614684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.614726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.614750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.623081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.623123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.623146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.632167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.632210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.639951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.639992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.640017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.648453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.648493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.648523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.656025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.656066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.656089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.664897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.368 [2024-07-22 23:24:09.664938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.664961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.368 [2024-07-22 23:24:09.673259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 
00:43:33.368 [2024-07-22 23:24:09.673302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.368 [2024-07-22 23:24:09.673343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.678789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.678832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.678855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.686821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.686863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.686893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.695477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.695518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.695542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.704034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.704074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.704096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.713252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.713295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.713332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.721738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.721781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.721804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.730173] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.730215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.730239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.737363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.737403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.737427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.745129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.745171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.745195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.752646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.752688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.752711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.761350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.761402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.761427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.770784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.770828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.770852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.779604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.779656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.779680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:43:33.630 [2024-07-22 23:24:09.786581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.786623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.786646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.794257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.794299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.794336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.801736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.801778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.801800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.810053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.810095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.810118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.818196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.818238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.818261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.826294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.826344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.826368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.835358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.835401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.835424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.842349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.842392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.842415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.848984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.849027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.849050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.855676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.855718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.855741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.862296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.862345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.862369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.869025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.869066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.869089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.876392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.630 [2024-07-22 23:24:09.876434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.630 [2024-07-22 23:24:09.876459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.630 [2024-07-22 23:24:09.883146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.883187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.883210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.889832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.889872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.889904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.896422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.896463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.896486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.903578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.903630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.903653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.912166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.912207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.912230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.920389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.920432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.920455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.925005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.925045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.925068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.631 [2024-07-22 23:24:09.933084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.631 [2024-07-22 23:24:09.933124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.631 [2024-07-22 23:24:09.933146] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.911 [2024-07-22 23:24:09.940729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.911 [2024-07-22 23:24:09.940771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.911 [2024-07-22 23:24:09.940794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.911 [2024-07-22 23:24:09.948524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.911 [2024-07-22 23:24:09.948567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.911 [2024-07-22 23:24:09.948591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.911 [2024-07-22 23:24:09.957056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:09.957099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:09.957123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:09.965238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:09.965281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:09.965305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:09.974348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:09.974391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:09.974414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:09.983408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:09.983450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:09.983474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:09.991637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:09.991679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:33.912 [2024-07-22 23:24:09.991703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.000129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.000171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.000195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.010193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.010269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.010296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.018595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.018641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.018664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.027089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.027138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.027173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.035888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.035936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.035961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.044740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.044787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.044811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.054964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.055012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.055036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.064060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.064104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.064127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.073967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.074021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.074047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.083039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.083082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.083106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.094407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.094450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.094474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.104636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.104680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.104704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.115257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.115320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.115348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.125976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.126020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.126044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.135222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.135265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.135288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.144153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.144194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.144217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.152723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.152763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.152787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.161490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.161531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.161554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.169808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.169849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.169872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.177942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.177983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.178006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.185555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.185595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.912 [2024-07-22 23:24:10.185618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:33.912 [2024-07-22 23:24:10.192971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.912 [2024-07-22 23:24:10.193011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.913 [2024-07-22 23:24:10.193034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:33.913 [2024-07-22 23:24:10.200906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.913 [2024-07-22 23:24:10.200950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.913 [2024-07-22 23:24:10.200983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:33.913 [2024-07-22 23:24:10.208956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:33.913 [2024-07-22 23:24:10.209006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:33.913 [2024-07-22 23:24:10.209032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.207 [2024-07-22 23:24:10.217087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.207 [2024-07-22 23:24:10.217135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.207 [2024-07-22 23:24:10.217160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.207 [2024-07-22 23:24:10.224794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.207 [2024-07-22 23:24:10.224836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.207 [2024-07-22 23:24:10.224860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.207 [2024-07-22 23:24:10.233169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.207 [2024-07-22 23:24:10.233216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.207 [2024-07-22 23:24:10.233245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.207 [2024-07-22 23:24:10.241444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.207 
[2024-07-22 23:24:10.241487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.207 [2024-07-22 23:24:10.241510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.207 [2024-07-22 23:24:10.249818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.249860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.249883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.258175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.258216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.258252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.266647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.266687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.266709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.274878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.274921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.274944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.283602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.283644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.283667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.291707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.291747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.291770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.298794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.298835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.298857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.305648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.305688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.305711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.313677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.313718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.313741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.321537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.321587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.321609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.329660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.329708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.329732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.337802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.337841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.337863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.345300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.345349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.345372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.353250] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.353293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.353331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.361225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.361265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.361288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.369127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.369167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.369189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.377043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.377084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.377107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.384987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.385028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.385051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.393093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.393133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.393156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.400967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.401007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.401030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:43:34.208 [2024-07-22 23:24:10.409147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.409187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.409210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.417701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.417740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.417762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.426169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.426209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.426231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.434842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.434883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.434906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.443475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.443515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.443538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.452067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.452106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.208 [2024-07-22 23:24:10.452128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.208 [2024-07-22 23:24:10.460563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.208 [2024-07-22 23:24:10.460603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.460626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.209 [2024-07-22 23:24:10.469047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.209 [2024-07-22 23:24:10.469087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.469116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.209 [2024-07-22 23:24:10.477554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.209 [2024-07-22 23:24:10.477594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.477618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.209 [2024-07-22 23:24:10.486110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.209 [2024-07-22 23:24:10.486150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.486172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.209 [2024-07-22 23:24:10.494640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.209 [2024-07-22 23:24:10.494681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.494704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.209 [2024-07-22 23:24:10.503068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.209 [2024-07-22 23:24:10.503108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.503132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.209 [2024-07-22 23:24:10.511527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.209 [2024-07-22 23:24:10.511566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.209 [2024-07-22 23:24:10.511589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.520045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.520086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.520110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.528854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.528895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.528918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.537448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.537488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.537511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.545892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.545940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.545963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.554387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.554426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.554449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.562723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.562763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.562785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.571194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.571234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.571257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.579716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.579757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.579781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.588222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.588262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.588284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.596613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.596653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.596675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.604297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.604349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.604373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.610866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.610907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.610930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.617584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.617625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.617649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.624232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.624275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.624298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.630935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.630975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 
[2024-07-22 23:24:10.630998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.637590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.637637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.637663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.644654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.644694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.644717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.651975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.652015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.652039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.658923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.658963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.658985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.666588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.666634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.666657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.674288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.674352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.674377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.681893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.681934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.681957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.689276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.689327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.689361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.697279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.697334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.697362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.706319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.470 [2024-07-22 23:24:10.706361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.470 [2024-07-22 23:24:10.706385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.470 [2024-07-22 23:24:10.715737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.715780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.715803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.725131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.725174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.725197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.733909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.733952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.733976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.743366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.743408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.743432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.751566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.751607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.751631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.761065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.761108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.761131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.769820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.769862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.769885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.471 [2024-07-22 23:24:10.778734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.471 [2024-07-22 23:24:10.778776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.471 [2024-07-22 23:24:10.778799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.787167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.787211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.787235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.795390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.795431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.795454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.804066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.804106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.804129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.812634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.812676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.812699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.821698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.821739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.821772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.830878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.830920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.830943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.839814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.839856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.839880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.848613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.848655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.848679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.858265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.858321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.858348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.867359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 
[2024-07-22 23:24:10.867401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.867424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.875721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.875764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.875789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.884419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.730 [2024-07-22 23:24:10.884463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.730 [2024-07-22 23:24:10.884487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.730 [2024-07-22 23:24:10.893333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.893377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.893400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.901386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.901438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.901463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.910277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.910332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.910363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.919589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.919632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.919655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.928474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.928517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.928541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.937623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.937666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.937691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.947908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.947951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.947975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.958290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.958346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.958380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.968475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.968518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.968542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.978822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.978866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.978890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.989737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.989782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.989806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:10.999411] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:10.999454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:10.999478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:11.009846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:11.009890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:11.009914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:11.019899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:11.019942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:11.019966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:11.026679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:11.026723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:11.026747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.731 [2024-07-22 23:24:11.034720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.731 [2024-07-22 23:24:11.034764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.731 [2024-07-22 23:24:11.034788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.044866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.044911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.044935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.054272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.054327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.054354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:43:34.992 [2024-07-22 23:24:11.063660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.063704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.063740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.072385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.072430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.072454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.081116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.081159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.081183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.089428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.089470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.089494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.098275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.098334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.098361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.106488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.106530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.115041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.115085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.115109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.123445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.123485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.123508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.132183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.132225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.132248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.141380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.141432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.141456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.150843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.150886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.150909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.159999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.160040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.160063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.169106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.169148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.169171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.178448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.178490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.178514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.187258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.187300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.187333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.196442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.196484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.196508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.204889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.204930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.204953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.214086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.214127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.214151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.224039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.224081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.224106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.233295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.233346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.233371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.242943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.243007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.251100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.251141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.251164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.260638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.992 [2024-07-22 23:24:11.260679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.992 [2024-07-22 23:24:11.260702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.992 [2024-07-22 23:24:11.266979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.993 [2024-07-22 23:24:11.267020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.993 [2024-07-22 23:24:11.267043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:34.993 [2024-07-22 23:24:11.274426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.993 [2024-07-22 23:24:11.274467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.993 [2024-07-22 23:24:11.274491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:34.993 [2024-07-22 23:24:11.284024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.993 [2024-07-22 23:24:11.284066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.993 [2024-07-22 23:24:11.284089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.993 [2024-07-22 23:24:11.293079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.993 [2024-07-22 23:24:11.293120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.993 [2024-07-22 23:24:11.293153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:34.993 [2024-07-22 23:24:11.302017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:34.993 [2024-07-22 23:24:11.302059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.993 
[2024-07-22 23:24:11.302083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:35.251 [2024-07-22 23:24:11.310844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:35.251 [2024-07-22 23:24:11.310887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:35.251 [2024-07-22 23:24:11.310911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:35.251 [2024-07-22 23:24:11.320922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:35.251 [2024-07-22 23:24:11.320965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:35.251 [2024-07-22 23:24:11.320989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:35.251 [2024-07-22 23:24:11.330610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156f840) 00:43:35.251 [2024-07-22 23:24:11.330651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:35.251 [2024-07-22 23:24:11.330674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:35.251 00:43:35.251 Latency(us) 00:43:35.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:35.251 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:43:35.251 nvme0n1 : 2.00 3691.05 461.38 0.00 0.00 4328.52 1074.06 11311.03 00:43:35.251 =================================================================================================================== 00:43:35.251 Total : 3691.05 461.38 0.00 0.00 4328.52 1074.06 11311.03 00:43:35.251 0 00:43:35.251 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:43:35.251 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:43:35.251 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:43:35.251 | .driver_specific 00:43:35.251 | .nvme_error 00:43:35.251 | .status_code 00:43:35.251 | .command_transient_transport_error' 00:43:35.251 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 )) 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1062143 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1062143 ']' 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1062143 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:43:35.510 
23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1062143 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1062143' 00:43:35.510 killing process with pid 1062143 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1062143 00:43:35.510 Received shutdown signal, test time was about 2.000000 seconds 00:43:35.510 00:43:35.510 Latency(us) 00:43:35.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:35.510 =================================================================================================================== 00:43:35.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:35.510 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1062143 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1063070 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1063070 /var/tmp/bperf.sock 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1063070 ']' 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:35.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:35.770 23:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:35.770 [2024-07-22 23:24:12.024611] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:43:35.770 [2024-07-22 23:24:12.024797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063070 ] 00:43:36.030 EAL: No free 2048 kB hugepages reported on node 1 00:43:36.030 [2024-07-22 23:24:12.143641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:36.030 [2024-07-22 23:24:12.255019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:36.290 23:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:36.290 23:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:43:36.290 23:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:36.290 23:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:36.858 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:43:36.858 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.858 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:36.858 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.858 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:36.858 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:37.799 nvme0n1 00:43:37.799 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:43:37.799 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.799 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:37.799 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.799 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:37.799 23:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:37.799 Running I/O for 2 seconds... 
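The randwrite pass being started in the trace above can be read as the following sequence of RPC calls. This is only a sketch reconstructed from the xtrace lines, not the digest.sh source itself: it assumes the bdevperf instance launched with "-m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z" is already listening, the RPC/BPERF_SOCK variable names are illustrative, and it assumes the accel_error_inject_error calls (issued through rpc_cmd in the trace rather than bperf_rpc) target the default application RPC socket instead of the bperf socket.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Keep per-controller NVMe error counters and retry failed I/O indefinitely,
    # so injected digest errors are counted but never surface to the bdevperf job.
    $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Error injection stays disabled while the controller is attached ...
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest enabled (--ddgst), so every
    # data PDU carries a CRC32C that the receive path verifies.
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ... then the next 256 crc32c operations are corrupted, which is what produces
    # the "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR lines that follow.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive the 2-second randwrite workload configured on the bdevperf command line.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $BPERF_SOCK perform_tests

    # Afterwards the transient-transport-error counter is read back, the same way
    # get_transient_errcount did for the randread pass earlier, and asserted > 0.
    $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The combination of --bdev-retry-count -1 and --nvme-error-stat is what lets the test pass despite hundreds of injected digest failures: retries hide the errors from the workload, while the per-status-code counters still record each TRANSIENT TRANSPORT ERROR for the final check.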
00:43:37.799 [2024-07-22 23:24:13.959646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ee5c8 00:43:37.799 [2024-07-22 23:24:13.960916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:13.960967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:13.974677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fac10 00:43:37.799 [2024-07-22 23:24:13.975914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:13.975953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:13.991909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ebfd0 00:43:37.799 [2024-07-22 23:24:13.992876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:13.992915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.008463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f3e60 00:43:37.799 [2024-07-22 23:24:14.009650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.009688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.023393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6738 00:43:37.799 [2024-07-22 23:24:14.025586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.025624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.037050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2d80 00:43:37.799 [2024-07-22 23:24:14.038064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.038100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.053542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fac10 00:43:37.799 [2024-07-22 23:24:14.054752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.054790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.070057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2510 00:43:37.799 [2024-07-22 23:24:14.071505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.071544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.086559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eee38 00:43:37.799 [2024-07-22 23:24:14.088189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.088227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:37.799 [2024-07-22 23:24:14.103056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f6020 00:43:37.799 [2024-07-22 23:24:14.104913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:37.799 [2024-07-22 23:24:14.104952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.119550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:38.059 [2024-07-22 23:24:14.121614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.121654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.136086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e4578 00:43:38.059 [2024-07-22 23:24:14.138375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.138413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.152624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ed4e8 00:43:38.059 [2024-07-22 23:24:14.155124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.155162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.163850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f35f0 00:43:38.059 [2024-07-22 23:24:14.164874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.164911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 
cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.178776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e95a0 00:43:38.059 [2024-07-22 23:24:14.179784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.179821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.199091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fc560 00:43:38.059 [2024-07-22 23:24:14.201168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.201206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.212184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2d80 00:43:38.059 [2024-07-22 23:24:14.213387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.059 [2024-07-22 23:24:14.213427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:38.059 [2024-07-22 23:24:14.228426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fb8b8 00:43:38.059 [2024-07-22 23:24:14.229829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.229868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.244396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6b70 00:43:38.060 [2024-07-22 23:24:14.245840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.245878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.259070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f3e60 00:43:38.060 [2024-07-22 23:24:14.260476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.260513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.275582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fc128 00:43:38.060 [2024-07-22 23:24:14.277184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.277222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.292722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e23b8 00:43:38.060 [2024-07-22 23:24:14.294067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.294105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.307628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f6020 00:43:38.060 [2024-07-22 23:24:14.310004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.310042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.321163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:38.060 [2024-07-22 23:24:14.322337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.322373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.337658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f92c0 00:43:38.060 [2024-07-22 23:24:14.339048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.339085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:38.060 [2024-07-22 23:24:14.354151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f81e0 00:43:38.060 [2024-07-22 23:24:14.355764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.060 [2024-07-22 23:24:14.355801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.370717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eff18 00:43:38.320 [2024-07-22 23:24:14.372481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.372520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.387248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e3d08 00:43:38.320 [2024-07-22 23:24:14.389289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.389335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.403759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e1b48 00:43:38.320 [2024-07-22 23:24:14.405996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.406033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.420261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e5658 00:43:38.320 [2024-07-22 23:24:14.422736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.422772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.436748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ebfd0 00:43:38.320 [2024-07-22 23:24:14.439402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.449360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f1868 00:43:38.320 [2024-07-22 23:24:14.451162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.451199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.465851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ebfd0 00:43:38.320 [2024-07-22 23:24:14.467870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.467909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.482416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ff3c8 00:43:38.320 [2024-07-22 23:24:14.484639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.484677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.498901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f7970 00:43:38.320 [2024-07-22 23:24:14.501340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.501377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:38.320 [2024-07-22 23:24:14.515396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ed0b0 00:43:38.320 [2024-07-22 23:24:14.518059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.320 [2024-07-22 23:24:14.518097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.526546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e2c28 00:43:38.321 [2024-07-22 23:24:14.527715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.527752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.541477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ec840 00:43:38.321 [2024-07-22 23:24:14.542610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.542647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.558013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0bc0 00:43:38.321 [2024-07-22 23:24:14.559401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.559439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.574502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fb048 00:43:38.321 [2024-07-22 23:24:14.576088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.576126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.591059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2d80 00:43:38.321 [2024-07-22 23:24:14.592855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.592894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.607730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e5658 00:43:38.321 [2024-07-22 23:24:14.609741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.609780] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:38.321 [2024-07-22 23:24:14.624240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0bc0 00:43:38.321 [2024-07-22 23:24:14.626440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.321 [2024-07-22 23:24:14.626480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.640831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ee190 00:43:38.581 [2024-07-22 23:24:14.643251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.643290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.652265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eb328 00:43:38.581 [2024-07-22 23:24:14.653418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.653455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.668782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e5658 00:43:38.581 [2024-07-22 23:24:14.670132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.670171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.684843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fda78 00:43:38.581 [2024-07-22 23:24:14.686211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.686258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.701073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f6cc8 00:43:38.581 [2024-07-22 23:24:14.702014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.702052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.719956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f81e0 00:43:38.581 [2024-07-22 23:24:14.722419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.722464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.731359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f81e0 00:43:38.581 [2024-07-22 23:24:14.732518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.732556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.747409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e5a90 00:43:38.581 [2024-07-22 23:24:14.748552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.581 [2024-07-22 23:24:14.748595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:38.581 [2024-07-22 23:24:14.766001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e8d30 00:43:38.582 [2024-07-22 23:24:14.767864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.767901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.780945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:38.582 [2024-07-22 23:24:14.782785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.782821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.797459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190de038 00:43:38.582 [2024-07-22 23:24:14.799519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.799556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.813973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e1710 00:43:38.582 [2024-07-22 23:24:14.816237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.816274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.830486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f31b8 00:43:38.582 [2024-07-22 23:24:14.832952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.832989] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.841635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fc560 00:43:38.582 [2024-07-22 23:24:14.842634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.842678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.859470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0ff8 00:43:38.582 [2024-07-22 23:24:14.861841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.861879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.873022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ebfd0 00:43:38.582 [2024-07-22 23:24:14.874212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.874248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:38.582 [2024-07-22 23:24:14.889552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e5220 00:43:38.582 [2024-07-22 23:24:14.890961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.582 [2024-07-22 23:24:14.890998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:38.842 [2024-07-22 23:24:14.906068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6738 00:43:38.842 [2024-07-22 23:24:14.907693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.842 [2024-07-22 23:24:14.907731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:38.842 [2024-07-22 23:24:14.922567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f5be8 00:43:38.843 [2024-07-22 23:24:14.924402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:14.924439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:14.937271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e1b48 00:43:38.843 [2024-07-22 23:24:14.938477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 
23:24:14.938514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:14.951675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fbcf0 00:43:38.843 [2024-07-22 23:24:14.952861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:14.952899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:14.968235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e84c0 00:43:38.843 [2024-07-22 23:24:14.969601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:14.969638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:14.984808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f8a50 00:43:38.843 [2024-07-22 23:24:14.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:14.986457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.001373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ec408 00:43:38.843 [2024-07-22 23:24:15.003202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.003241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.017888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f6cc8 00:43:38.843 [2024-07-22 23:24:15.019929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.019967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.034400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eaab8 00:43:38.843 [2024-07-22 23:24:15.036654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.036690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.050873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f9b30 00:43:38.843 [2024-07-22 23:24:15.053343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 
[2024-07-22 23:24:15.053380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.067378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fb480 00:43:38.843 [2024-07-22 23:24:15.070050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.070088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.078564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0bc0 00:43:38.843 [2024-07-22 23:24:15.079765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.079801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.096513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e1710 00:43:38.843 [2024-07-22 23:24:15.098534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.098571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.113006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190feb58 00:43:38.843 [2024-07-22 23:24:15.115254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.115291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.129504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ef270 00:43:38.843 [2024-07-22 23:24:15.131951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.131988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:38.843 [2024-07-22 23:24:15.144206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0788 00:43:38.843 [2024-07-22 23:24:15.146019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:38.843 [2024-07-22 23:24:15.146056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:39.104 [2024-07-22 23:24:15.158545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f3e60 00:43:39.104 [2024-07-22 23:24:15.160940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14976 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:43:39.104 [2024-07-22 23:24:15.160979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:39.104 [2024-07-22 23:24:15.172099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ebfd0 00:43:39.105 [2024-07-22 23:24:15.173277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.173323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.188611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190de038 00:43:39.105 [2024-07-22 23:24:15.190026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.190063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.205138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f8618 00:43:39.105 [2024-07-22 23:24:15.206747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.206784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.221778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e7818 00:43:39.105 [2024-07-22 23:24:15.223590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.223627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.238295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0788 00:43:39.105 [2024-07-22 23:24:15.240341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.240380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.254809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fc128 00:43:39.105 [2024-07-22 23:24:15.257041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.257092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.271304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e0a68 00:43:39.105 [2024-07-22 23:24:15.273757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18930 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.273796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.287795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fb048 00:43:39.105 [2024-07-22 23:24:15.290461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.290499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.298910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2510 00:43:39.105 [2024-07-22 23:24:15.300068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.300105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.313736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e38d0 00:43:39.105 [2024-07-22 23:24:15.314878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.314915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.330133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e99d8 00:43:39.105 [2024-07-22 23:24:15.331516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.331553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.349322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:39.105 [2024-07-22 23:24:15.351378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.351415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.364248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6fa8 00:43:39.105 [2024-07-22 23:24:15.366289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.366335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.380787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e7818 00:43:39.105 [2024-07-22 23:24:15.383046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1584 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.383083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.397295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190ed0b0 00:43:39.105 [2024-07-22 23:24:15.399793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.399838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:39.105 [2024-07-22 23:24:15.412029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eaab8 00:43:39.105 [2024-07-22 23:24:15.413869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.105 [2024-07-22 23:24:15.413908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.426400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e99d8 00:43:39.366 [2024-07-22 23:24:15.428758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.428797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.439983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190dfdc0 00:43:39.366 [2024-07-22 23:24:15.441179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.441215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.456496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fbcf0 00:43:39.366 [2024-07-22 23:24:15.457887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.457924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.473010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:39.366 [2024-07-22 23:24:15.474622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.474658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.489562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0bc0 00:43:39.366 [2024-07-22 23:24:15.491387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.491425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.506078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eaab8 00:43:39.366 [2024-07-22 23:24:15.508122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.508160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.522611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f7da8 00:43:39.366 [2024-07-22 23:24:15.524861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.524898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.539124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f81e0 00:43:39.366 [2024-07-22 23:24:15.541591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.541628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.555638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6300 00:43:39.366 [2024-07-22 23:24:15.558320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.558357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.566871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6738 00:43:39.366 [2024-07-22 23:24:15.568074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.568112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.581793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190efae0 00:43:39.366 [2024-07-22 23:24:15.582987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.583024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.598306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2510 00:43:39.366 [2024-07-22 23:24:15.599718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:106 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.599756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.614866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:39.366 [2024-07-22 23:24:15.616492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.616528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.631381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fcdd0 00:43:39.366 [2024-07-22 23:24:15.633199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.633236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.647897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190feb58 00:43:39.366 [2024-07-22 23:24:15.649945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.649982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:39.366 [2024-07-22 23:24:15.664420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f5be8 00:43:39.366 [2024-07-22 23:24:15.666671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.366 [2024-07-22 23:24:15.666708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.680910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f6890 00:43:39.626 [2024-07-22 23:24:15.683380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.626 [2024-07-22 23:24:15.683418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.697459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190fbcf0 00:43:39.626 [2024-07-22 23:24:15.700145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.626 [2024-07-22 23:24:15.700182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.708631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f8e88 00:43:39.626 [2024-07-22 23:24:15.709844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.626 [2024-07-22 23:24:15.709880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.723547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190dfdc0 00:43:39.626 [2024-07-22 23:24:15.724735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.626 [2024-07-22 23:24:15.724771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.740045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f7da8 00:43:39.626 [2024-07-22 23:24:15.741457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.626 [2024-07-22 23:24:15.741494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.756581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f46d0 00:43:39.626 [2024-07-22 23:24:15.758212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.626 [2024-07-22 23:24:15.758250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:39.626 [2024-07-22 23:24:15.773141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f0788 00:43:39.627 [2024-07-22 23:24:15.774948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.789646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6fa8 00:43:39.627 [2024-07-22 23:24:15.791698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.791735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.806216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e6300 00:43:39.627 [2024-07-22 23:24:15.808427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.808472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.822737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f81e0 00:43:39.627 [2024-07-22 23:24:15.825211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.825250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.839259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190f2510 00:43:39.627 [2024-07-22 23:24:15.841929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.841966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.850666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e01f8 00:43:39.627 [2024-07-22 23:24:15.852085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.852122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.867188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e0630 00:43:39.627 [2024-07-22 23:24:15.868821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.868858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.883653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190feb58 00:43:39.627 [2024-07-22 23:24:15.885495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.885533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.899539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eaab8 00:43:39.627 [2024-07-22 23:24:15.900720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.900757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.914459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190eff18 00:43:39.627 [2024-07-22 23:24:15.916626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:39.627 [2024-07-22 23:24:15.916663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:39.627 [2024-07-22 23:24:15.928013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e1710 00:43:39.627 [2024-07-22 23:24:15.929027] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:39.627 [2024-07-22 23:24:15.929064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:43:39.887 [2024-07-22 23:24:15.944561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77c90) with pdu=0x2000190e49b0
00:43:39.887 [2024-07-22 23:24:15.945769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:39.887 [2024-07-22 23:24:15.945808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:43:39.887
00:43:39.887 Latency(us)
00:43:39.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:39.887 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:43:39.887 nvme0n1 : 2.01 16158.03 63.12 0.00 0.00 7912.14 4077.80 21748.24
00:43:39.887 ===================================================================================================================
00:43:39.887 Total : 16158.03 63.12 0.00 0.00 7912.14 4077.80 21748.24
00:43:39.887 0
00:43:39.887 23:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:39.887 23:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:39.887 23:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:39.887 23:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:39.887 | .driver_specific
00:43:39.887 | .nvme_error
00:43:39.887 | .status_code
00:43:39.887 | .command_transient_transport_error'
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 ))
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1063070
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1063070 ']'
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1063070
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1063070
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1063070'
00:43:40.458 killing process with pid 1063070
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1063070
00:43:40.458 Received shutdown signal, test time was about 2.000000 seconds
00:43:40.458
00:43:40.458 Latency(us)
00:43:40.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:40.458 ===================================================================================================================
00:43:40.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:40.458 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1063070
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1063602
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1063602 /var/tmp/bperf.sock
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1063602 ']'
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:40.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:43:40.718 23:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:43:40.718 [2024-07-22 23:24:16.957809] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization...
00:43:40.718 [2024-07-22 23:24:16.957993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063602 ]
00:43:40.718 I/O size of 131072 is greater than zero copy threshold (65536).
00:43:40.718 Zero copy mechanism will not be used.
00:43:40.718 EAL: No free 2048 kB hugepages reported on node 1 00:43:40.978 [2024-07-22 23:24:17.061307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:40.978 [2024-07-22 23:24:17.171385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:41.238 23:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:41.238 23:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:43:41.239 23:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:41.239 23:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:41.809 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:43:41.809 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.809 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:41.809 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.809 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:41.809 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:42.749 nvme0n1 00:43:42.749 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:43:42.749 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:42.749 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:42.749 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:42.749 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:42.749 23:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:43.009 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:43.009 Zero copy mechanism will not be used. 00:43:43.009 Running I/O for 2 seconds... 
00:43:43.009 [2024-07-22 23:24:19.166131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.166562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.166612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.174366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.174825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.174865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.182629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.183092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.183130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.190898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.191371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.191410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.199085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.199548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.199588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.207373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.207837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.207875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.215599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.215996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.216034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.223696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.224155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.224202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.231870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.009 [2024-07-22 23:24:19.232264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.009 [2024-07-22 23:24:19.232301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.009 [2024-07-22 23:24:19.239940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.240397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.240435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.248181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.248643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.248681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.256690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.257144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.257182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.264871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.265341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.265380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.273182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.273637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.273676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.281676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.281784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.281821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.290749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.291198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.291236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.299606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.300066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.300105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.308130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.308521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.308560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.010 [2024-07-22 23:24:19.316813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.010 [2024-07-22 23:24:19.317270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.010 [2024-07-22 23:24:19.317319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.325126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.325581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.325621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.333853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.334322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.334362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.342416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.342812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.342852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.350712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.351185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.351225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.359108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.359576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.359616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.367416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.367888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.367927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.375709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.376157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.376196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.383985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.384452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.384491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.392306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.392711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 
[2024-07-22 23:24:19.392750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.400418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.400812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.400850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.408591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.270 [2024-07-22 23:24:19.409002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.270 [2024-07-22 23:24:19.409040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.270 [2024-07-22 23:24:19.416895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.417360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.417400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.425302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.425720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.425759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.433607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.434060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.434098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.441955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.442425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.442471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.450246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.450716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.450754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.458295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.458729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.458767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.466336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.466767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.466806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.475015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.475481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.475521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.483397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.483799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.483838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.491708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.492118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.492156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.499684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.500093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.500131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.507694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.508099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.508137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.515730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.516132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.516170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.523709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.524143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.524181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.531450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.531822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.531860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.539180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.539575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.539614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.546933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.547377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.547415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.554786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.555160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.555198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.562565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.562930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.562968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.570306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.570733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.570771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.271 [2024-07-22 23:24:19.578122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.271 [2024-07-22 23:24:19.578521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.271 [2024-07-22 23:24:19.578560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.532 [2024-07-22 23:24:19.585781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.532 [2024-07-22 23:24:19.586156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.586195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.593678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.594068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.594107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.601979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.602363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.602402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.609889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.610262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.610301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.617921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.618299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.618348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.625833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.626209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.626248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.634461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.634827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.634865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.642621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.643048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.643087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.650851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.651222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.651268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.658919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.659324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.659369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.666720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.667094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.667133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.674573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 
[2024-07-22 23:24:19.675010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.675049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.682508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.682938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.682976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.690238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.690666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.690705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.698079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.698523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.698561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.705998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.706383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.706422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.714271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.714715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.714754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.722685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.723119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.723157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.730662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with 
pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.731040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.731078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.738648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.739021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.739059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.746801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.747173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.747211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.754922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.755291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.763068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.763451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.763489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.771257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.771692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.771730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.779462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.779905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.779943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.787254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.787696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.787735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.795117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.795542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.533 [2024-07-22 23:24:19.795580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.533 [2024-07-22 23:24:19.803054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.533 [2024-07-22 23:24:19.803432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.534 [2024-07-22 23:24:19.803471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.534 [2024-07-22 23:24:19.810879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.534 [2024-07-22 23:24:19.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.534 [2024-07-22 23:24:19.811356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.534 [2024-07-22 23:24:19.818763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.534 [2024-07-22 23:24:19.819134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.534 [2024-07-22 23:24:19.819171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.534 [2024-07-22 23:24:19.826520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.534 [2024-07-22 23:24:19.826964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.534 [2024-07-22 23:24:19.827002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.534 [2024-07-22 23:24:19.834596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.534 [2024-07-22 23:24:19.834965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.534 [2024-07-22 23:24:19.835003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.843040] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.843427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.843470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.851019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.851450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.851489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.859493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.859920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.859965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.867601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.868035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.868072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.875451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.875883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.875921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.883233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.883859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.883898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.891189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.891292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.891340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:43:43.795 [2024-07-22 23:24:19.899189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.899291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.899338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.907154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.907259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.907295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.915409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.915512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.915547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.923688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.923796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.923832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.931203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.931306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.931359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.939042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.939149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.795 [2024-07-22 23:24:19.939187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.795 [2024-07-22 23:24:19.947049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.795 [2024-07-22 23:24:19.947145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.947182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:19.954948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:19.955067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.955102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:19.963042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:19.963146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.963181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:19.971160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:19.971268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.971304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:19.979228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:19.979339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.979376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:19.987259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:19.987375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.987411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:19.995436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:19.995540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:19.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.004307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.004437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.004480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.012474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.012588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.012626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.020627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.020740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.020780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.028804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.028915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.028955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.037055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.037166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.037207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.045200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.045339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.045379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.054406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.054526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.054569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.062492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.062608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.062644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.070501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.070608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.070657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.078460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.078563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.078601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.086484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.086589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.086626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.094430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.094538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.094574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.796 [2024-07-22 23:24:20.102678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:43.796 [2024-07-22 23:24:20.102786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.796 [2024-07-22 23:24:20.102824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.111187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.111326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.111366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.119065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.119169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.119206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.126922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.127022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.127059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.134825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.134933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.134969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.142730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.142846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.142881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.151481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.151653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.151691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.160385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.160495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.160531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.168504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.168607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.168642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.176617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.176719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.176756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.184879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.184982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.185020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.193348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.193467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.193503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.202176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.202294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.202341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.210657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.210773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.210808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.218928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.219046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.219083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.227636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.227766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.227804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.236294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.236412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 
23:24:20.236448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.244521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.244628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.244664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.253062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.253164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.253200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.261830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.261934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.261970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.270437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.270542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.270578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.278345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.278453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.278488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.286229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.286342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.286385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.294224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.294336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:44.057 [2024-07-22 23:24:20.294372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.302264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.302383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.302419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.310358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.310470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.310506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.318232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.318351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.318391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.326165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.326266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.326302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.334243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.334379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.334418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.342199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.342305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.342351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.350185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.350291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.350337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.358307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.358437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.358473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.057 [2024-07-22 23:24:20.366345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.057 [2024-07-22 23:24:20.366458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.057 [2024-07-22 23:24:20.366494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.374302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.374425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.374462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.382321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.382435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.382479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.390335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.390451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.390487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.398353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.398466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.398501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.406433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.406541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.406577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.414459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.414566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.414602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.422645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.422750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.422786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.430749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.430854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.430889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.438747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.438850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.318 [2024-07-22 23:24:20.438887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.318 [2024-07-22 23:24:20.446735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.318 [2024-07-22 23:24:20.446838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.446874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.454747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.454853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.454889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.462808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.462913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.462949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.470893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.471114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.471152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.479719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.479888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.479926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.487747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.487850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.487886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.495871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.495978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.496020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.503954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.504071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.504107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.512140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.512265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.512302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.520675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.520907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.520945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.529212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.529371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.529409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.538042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.538197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.538235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.546916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.547133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.547171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.555831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.556048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.556086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.564727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.564899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.564937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.574177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.574357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.574397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.584088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.584215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.584252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.593210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.593344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.593381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.601559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.601675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.601711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.609498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.609622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.609659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.617242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.617353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.617389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.319 [2024-07-22 23:24:20.625698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.319 [2024-07-22 23:24:20.625951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.319 [2024-07-22 23:24:20.625989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.634672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.634875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.634913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.643801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 
23:24:20.643986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.644024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.653755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.653908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.653947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.661618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.661779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.661816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.669546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.669675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.669713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.677320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.677421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.677457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.684533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.684636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.684671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.692399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.692501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.692539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.700175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 
00:43:44.579 [2024-07-22 23:24:20.700287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.700335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.708122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.708229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.708265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.715906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.716011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.716046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.723886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.723991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.579 [2024-07-22 23:24:20.724027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.579 [2024-07-22 23:24:20.731909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.579 [2024-07-22 23:24:20.732016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.732052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.739540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.739731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.739770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.747645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.747755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.747791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.755753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) 
with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.755906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.755943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.763497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.763593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.763635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.771203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.771324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.771364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.778968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.779077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.779113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.786604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.786707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.786750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.795106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.795274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.795322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.803394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.803527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.803565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.811666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.811781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.811817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.820027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.820183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.820221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.827916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.828021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.828057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.835618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.835736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.835772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.843444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.843554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.843590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.851417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.851516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.851552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.859468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.859568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.859605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.867838] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.867986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.868024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.875716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.875885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.875923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.580 [2024-07-22 23:24:20.884394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.580 [2024-07-22 23:24:20.884492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.580 [2024-07-22 23:24:20.884527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.892813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.892922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.892959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.902618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.902727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.902764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.910842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.910986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.911025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.918960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.919060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.919095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.927195] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.927364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.927402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.936226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.936399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.936436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.946011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.946149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.946188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.955560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.955776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.955814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.964706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.964937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.964976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.973753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.973938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.973975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.982371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.982520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.982558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.841 
[2024-07-22 23:24:20.990953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:20.991206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:20.991245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:20.999912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.000152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.000191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.008904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.009139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.009185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.017623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.017823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.017861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.026307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.026472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.026510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.035606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.035730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.035766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.045283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.045458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.045496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:43:44.841 [2024-07-22 23:24:21.055857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.056046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.056085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.066399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.066550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.066588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.075048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.075173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.075215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.084536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.084667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.084705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:44.841 [2024-07-22 23:24:21.093761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.841 [2024-07-22 23:24:21.093882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.841 [2024-07-22 23:24:21.093918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:44.842 [2024-07-22 23:24:21.103027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.842 [2024-07-22 23:24:21.103191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.842 [2024-07-22 23:24:21.103229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:44.842 [2024-07-22 23:24:21.112114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90 00:43:44.842 [2024-07-22 23:24:21.112221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:44.842 [2024-07-22 23:24:21.112257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:43:44.842 [2024-07-22 23:24:21.121084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90
00:43:44.842 [2024-07-22 23:24:21.121212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:44.842 [2024-07-22 23:24:21.121251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:43:44.842 [2024-07-22 23:24:21.129943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90
00:43:44.842 [2024-07-22 23:24:21.130050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:44.842 [2024-07-22 23:24:21.130086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:43:44.842 [2024-07-22 23:24:21.138738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90
00:43:44.842 [2024-07-22 23:24:21.138844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:44.842 [2024-07-22 23:24:21.138880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:43:44.842 [2024-07-22 23:24:21.147410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90
00:43:44.842 [2024-07-22 23:24:21.147513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:44.842 [2024-07-22 23:24:21.147549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:43:45.102 [2024-07-22 23:24:21.155685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd77e30) with pdu=0x2000190fef90
00:43:45.102 [2024-07-22 23:24:21.155800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:45.102 [2024-07-22 23:24:21.155837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:43:45.102
00:43:45.102 Latency(us)
00:43:45.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:45.102 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:43:45.102 nvme0n1 : 2.00 3742.78 467.85 0.00 0.00 4264.16 3203.98 11116.85
00:43:45.102 ===================================================================================================================
00:43:45.102 Total : 3742.78 467.85 0.00 0.00 4264.16 3203.98 11116.85
00:43:45.102 0
00:43:45.102 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:45.102 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:45.102 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:45.102 | .driver_specific
00:43:45.102 | .nvme_error
00:43:45.102 | .status_code
00:43:45.102 | .command_transient_transport_error'
00:43:45.102 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 241 > 0 ))
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1063602
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1063602 ']'
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1063602
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1063602
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1063602'
00:43:45.672 killing process with pid 1063602
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1063602
00:43:45.672 Received shutdown signal, test time was about 2.000000 seconds
00:43:45.672
00:43:45.672 Latency(us)
00:43:45.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:45.672 ===================================================================================================================
00:43:45.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:45.672 23:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1063602
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1061442
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1061442 ']'
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1061442
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1061442
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1061442'
00:43:45.932 killing process with pid 1061442
00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill
1061442 00:43:45.932 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1061442 00:43:46.503 00:43:46.503 real 0m20.426s 00:43:46.503 user 0m41.944s 00:43:46.503 sys 0m5.933s 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:46.503 ************************************ 00:43:46.503 END TEST nvmf_digest_error 00:43:46.503 ************************************ 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:46.503 rmmod nvme_tcp 00:43:46.503 rmmod nvme_fabrics 00:43:46.503 rmmod nvme_keyring 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1061442 ']' 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1061442 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1061442 ']' 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1061442 00:43:46.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1061442) - No such process 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1061442 is not found' 00:43:46.503 Process with pid 1061442 is not found 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:46.503 23:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:48.416 23:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:43:48.416 00:43:48.416 real 0m46.124s 00:43:48.416 user 1m23.789s 00:43:48.416 sys 0m14.023s 00:43:48.416 23:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:48.416 23:24:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:43:48.416 ************************************ 00:43:48.416 END TEST nvmf_digest 00:43:48.416 ************************************ 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:48.676 ************************************ 00:43:48.676 START TEST nvmf_bdevperf 00:43:48.676 ************************************ 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:43:48.676 * Looking for test storage... 00:43:48.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:48.676 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
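The nvmf/common.sh trace above sets up everything a host-side initiator needs to reach a TCP target: NVMF_PORT=4420, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, NVME_CONNECT='nvme connect', and the NVME_HOST array carrying the generated --hostnqn/--hostid pair. A minimal sketch of how those variables are typically combined into a kernel-initiator connect call follows; the target address is a placeholder assumption and is not taken from this log.

# Sketch only (assumptions: nvmf/common.sh has been sourced and a target is already listening).
# NVME_CONNECT, NVME_HOST, NVMF_PORT and NVME_SUBNQN come from the common.sh lines traced above.
target_ip=10.0.0.2   # placeholder assumption; the listener address is not shown in this log
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a "$target_ip" -s "$NVMF_PORT" -n "$NVME_SUBNQN"

Keeping the host identity in the NVME_HOST array lets every test pass the same --hostnqn/--hostid pair without repeating the generated UUIDs.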
00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:48.677 23:24:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:43:48.677 23:24:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@296 -- # local -ga e810 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:51.999 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:51.999 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:51.999 23:24:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:51.999 Found net devices under 0000:84:00.0: cvl_0_0 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:51.999 Found net devices under 0000:84:00.1: cvl_0_1 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
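The scan above resolved each supported PCI function to its kernel net device through sysfs before TCP init starts. A small standalone sketch of that lookup (the hard-coded bus addresses are simply the two functions reported above):

    pci_devs=(0000:84:00.0 0000:84:00.1)                 # assumption: listed explicitly for the sketch
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # sysfs lists the netdevs bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the path, keeping names like cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "net_devs: ${net_devs[*]}"                      # expected here: cvl_0_0 cvl_0_1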
00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:51.999 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:52.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:52.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:43:52.000 00:43:52.000 --- 10.0.0.2 ping statistics --- 00:43:52.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.000 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:52.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:52.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:43:52.000 00:43:52.000 --- 10.0.0.1 ping statistics --- 00:43:52.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.000 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1066314 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1066314 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1066314 ']' 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:52.000 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.260 [2024-07-22 23:24:28.365798] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
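The namespace plumbing and ping checks traced above give the target side its own network stack on the same host, with 10.0.0.2 reachable from the root namespace over cvl_0_1. A condensed sketch of those steps, using the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk                                   # namespace that owns the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns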
00:43:52.260 [2024-07-22 23:24:28.365935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:52.260 EAL: No free 2048 kB hugepages reported on node 1 00:43:52.260 [2024-07-22 23:24:28.473127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:52.520 [2024-07-22 23:24:28.585247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:52.520 [2024-07-22 23:24:28.585324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:52.520 [2024-07-22 23:24:28.585348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:52.520 [2024-07-22 23:24:28.585365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:52.520 [2024-07-22 23:24:28.585379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:52.520 [2024-07-22 23:24:28.585477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:52.520 [2024-07-22 23:24:28.585522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:43:52.520 [2024-07-22 23:24:28.585525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.520 [2024-07-22 23:24:28.765692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:52.520 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.521 Malloc0 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.521 23:24:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.521 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:52.521 [2024-07-22 23:24:28.829719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:52.781 { 00:43:52.781 "params": { 00:43:52.781 "name": "Nvme$subsystem", 00:43:52.781 "trtype": "$TEST_TRANSPORT", 00:43:52.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.781 "adrfam": "ipv4", 00:43:52.781 "trsvcid": "$NVMF_PORT", 00:43:52.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.781 "hdgst": ${hdgst:-false}, 00:43:52.781 "ddgst": ${ddgst:-false} 00:43:52.781 }, 00:43:52.781 "method": "bdev_nvme_attach_controller" 00:43:52.781 } 00:43:52.781 EOF 00:43:52.781 )") 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:43:52.781 23:24:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:52.781 "params": { 00:43:52.781 "name": "Nvme1", 00:43:52.781 "trtype": "tcp", 00:43:52.781 "traddr": "10.0.0.2", 00:43:52.781 "adrfam": "ipv4", 00:43:52.781 "trsvcid": "4420", 00:43:52.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:52.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:52.781 "hdgst": false, 00:43:52.781 "ddgst": false 00:43:52.781 }, 00:43:52.781 "method": "bdev_nvme_attach_controller" 00:43:52.781 }' 00:43:52.781 [2024-07-22 23:24:28.880936] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
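The target configuration traced above is driven through rpc_cmd; the same sequence could be issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock (a sketch restating the calls recorded in the log, not an additional step of the test):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420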
00:43:52.781 [2024-07-22 23:24:28.881029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066369 ] 00:43:52.782 EAL: No free 2048 kB hugepages reported on node 1 00:43:52.782 [2024-07-22 23:24:28.958143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.782 [2024-07-22 23:24:29.067430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:53.352 Running I/O for 1 seconds... 00:43:54.292 00:43:54.292 Latency(us) 00:43:54.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.292 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:54.292 Verification LBA range: start 0x0 length 0x4000 00:43:54.292 Nvme1n1 : 1.01 6521.20 25.47 0.00 0.00 19532.32 1171.15 16214.09 00:43:54.292 =================================================================================================================== 00:43:54.292 Total : 6521.20 25.47 0.00 0.00 19532.32 1171.15 16214.09 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1066545 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:54.552 { 00:43:54.552 "params": { 00:43:54.552 "name": "Nvme$subsystem", 00:43:54.552 "trtype": "$TEST_TRANSPORT", 00:43:54.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:54.552 "adrfam": "ipv4", 00:43:54.552 "trsvcid": "$NVMF_PORT", 00:43:54.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:54.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:54.552 "hdgst": ${hdgst:-false}, 00:43:54.552 "ddgst": ${ddgst:-false} 00:43:54.552 }, 00:43:54.552 "method": "bdev_nvme_attach_controller" 00:43:54.552 } 00:43:54.552 EOF 00:43:54.552 )") 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:43:54.552 23:24:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:54.552 "params": { 00:43:54.552 "name": "Nvme1", 00:43:54.552 "trtype": "tcp", 00:43:54.552 "traddr": "10.0.0.2", 00:43:54.552 "adrfam": "ipv4", 00:43:54.552 "trsvcid": "4420", 00:43:54.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:54.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:54.552 "hdgst": false, 00:43:54.552 "ddgst": false 00:43:54.552 }, 00:43:54.552 "method": "bdev_nvme_attach_controller" 00:43:54.552 }' 00:43:54.552 [2024-07-22 23:24:30.759230] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
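What follows is the failure-injection half of the test: a second bdevperf run (-t 15 -f) is started against the same subsystem, the nvmf target is hard-killed a few seconds in, and the rest of the trace is the resulting burst of "ABORTED - SQ DELETION" completions for the I/O that was in flight. A condensed sketch of that step, assuming the variable and helper names visible in the trace (nvmfpid, gen_nvmf_target_json):

    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"      # hard-kill the target while bdevperf has I/O outstanding
    sleep 3                 # bdevperf must survive the aborted queue pairs and keep running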
00:43:54.552 [2024-07-22 23:24:30.759424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066545 ] 00:43:54.552 EAL: No free 2048 kB hugepages reported on node 1 00:43:54.812 [2024-07-22 23:24:30.865701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.812 [2024-07-22 23:24:30.973296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:55.071 Running I/O for 15 seconds... 00:43:57.614 23:24:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1066314 00:43:57.614 23:24:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:43:57.614 [2024-07-22 23:24:33.695194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.614 [2024-07-22 23:24:33.695624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.614 [2024-07-22 23:24:33.695643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.615 [2024-07-22 23:24:33.695745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.695957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.695998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 
[2024-07-22 23:24:33.696080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.696932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.696965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.615 [2024-07-22 23:24:33.697843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.615 [2024-07-22 23:24:33.697878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.697915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:57.616 [2024-07-22 23:24:33.697948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.697985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.616 [2024-07-22 23:24:33.698749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.698833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.698903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.698939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.698973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:43:57.616 [2024-07-22 23:24:33.699771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.699943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.699979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.700012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.700049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.700083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.700121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.616 [2024-07-22 23:24:33.700153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.616 [2024-07-22 23:24:33.700210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:57.617 [2024-07-22 23:24:33.700832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.700937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.700980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701088] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.701952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.701988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 
[2024-07-22 23:24:33.702482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.617 [2024-07-22 23:24:33.702559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.617 [2024-07-22 23:24:33.702612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.618 [2024-07-22 23:24:33.702645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.702691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.618 [2024-07-22 23:24:33.702726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.702764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.618 [2024-07-22 23:24:33.702796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.702833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.618 [2024-07-22 23:24:33.702866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.702902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:57.618 [2024-07-22 23:24:33.702936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.702970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd420 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.703010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:57.618 [2024-07-22 23:24:33.703037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:57.618 [2024-07-22 23:24:33.703065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1560 len:8 PRP1 0x0 PRP2 0x0 00:43:57.618 [2024-07-22 23:24:33.703096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.703220] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18fd420 was disconnected and freed. reset controller. 
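Each pair of entries in the dump above is a queued I/O command followed by the completion it was manually completed with. The status printed as "(00/08)" is status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification names "Command Aborted due to SQ Deletion" and which SPDK abbreviates as "ABORTED - SQ DELETION". A minimal, illustrative Python sketch for decoding that pair from one of these log entries (the helper and its mapping table are hypothetical and cover only the value seen here):

import re

# Illustrative decoder for the "(SCT/SC)" pair printed by spdk_nvme_print_completion
# in the dump above. Only the single status value that occurs in this log is mapped.
STATUS = re.compile(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:\d+ cid:\d+")

# Subset of NVMe generic command status codes (SCT 0x0); 0x08 is the one seen above.
GENERIC_STATUS = {0x08: "Command Aborted due to SQ Deletion"}

def decode_status(line):
    m = STATUS.search(line)
    if not m:
        return None
    sct = int(m.group("sct"), 16)
    sc = int(m.group("sc"), 16)
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct 0x%x / sc 0x%02x" % (sct, sc)

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))
# -> Command Aborted due to SQ Deletion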
00:43:57.618 [2024-07-22 23:24:33.703390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:57.618 [2024-07-22 23:24:33.703420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.703441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:57.618 [2024-07-22 23:24:33.703466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.703486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:57.618 [2024-07-22 23:24:33.703503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.703522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:57.618 [2024-07-22 23:24:33.703539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:57.618 [2024-07-22 23:24:33.703555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.709976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.710070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.711371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.711411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.711439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.711898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.712431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.712462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.712483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.618 [2024-07-22 23:24:33.719769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
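The entries immediately above complete one full recovery cycle that then repeats for the remainder of this section: the host disconnects and resets nqn.2016-06.io.spdk:cnode1, the TCP connect() to 10.0.0.2 port 4420 is refused (errno = 111), controller re-initialization fails, and the attempt ends with "Resetting controller failed" before the next one starts. A small, hypothetical helper (not part of the SPDK tree) that summarizes how many such cycles a console log in this format contains:

import re
import sys
from collections import Counter

# Hypothetical summarizer for console output in the format shown above. It counts
# reset attempts, failed resets and the errno values reported by posix_sock_create;
# the patterns match only text that actually appears in this log.
RESET_START = re.compile(r"nvme_ctrlr_disconnect: \*NOTICE\*: \[[^\]]+\] resetting controller")
RESET_FAILED = re.compile(r"_bdev_nvme_reset_ctrlr_complete: \*ERROR\*: Resetting controller failed")
CONNECT_ERRNO = re.compile(r"connect\(\) failed, errno = (\d+)")

def summarize(text):
    attempts = len(RESET_START.findall(text))
    failures = len(RESET_FAILED.findall(text))
    errnos = Counter(CONNECT_ERRNO.findall(text))
    print("reset attempts: %d, failed resets: %d, connect errnos: %s"
          % (attempts, failures, dict(errnos)))

if __name__ == "__main__":
    summarize(sys.stdin.read())

Piping the text of this section through the script (for example, python3 summarize_resets.py < console.log) would report matching counts of attempts and failed resets, all with errno 111.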
00:43:57.618 [2024-07-22 23:24:33.728335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.729014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.729084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.729124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.729563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.730108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.730161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.730195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.618 [2024-07-22 23:24:33.737861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.618 [2024-07-22 23:24:33.747337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.748178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.748246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.748285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.748710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.749261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.749326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.749366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.618 [2024-07-22 23:24:33.756464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.618 [2024-07-22 23:24:33.763789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.764574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.764642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.764682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.765219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.765600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.765677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.765713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.618 [2024-07-22 23:24:33.772579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.618 [2024-07-22 23:24:33.781301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.782020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.782088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.782127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.782550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.783041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.783094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.783127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.618 [2024-07-22 23:24:33.790185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.618 [2024-07-22 23:24:33.798944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.799734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.799803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.799842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.800394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.800799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.800852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.800886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.618 [2024-07-22 23:24:33.807857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.618 [2024-07-22 23:24:33.816574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.618 [2024-07-22 23:24:33.817401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.618 [2024-07-22 23:24:33.817439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.618 [2024-07-22 23:24:33.817461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.618 [2024-07-22 23:24:33.817906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.618 [2024-07-22 23:24:33.818427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.618 [2024-07-22 23:24:33.818457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.618 [2024-07-22 23:24:33.818476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.619 [2024-07-22 23:24:33.825532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.619 [2024-07-22 23:24:33.834376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.619 [2024-07-22 23:24:33.835038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.619 [2024-07-22 23:24:33.835105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.619 [2024-07-22 23:24:33.835144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.619 [2024-07-22 23:24:33.835573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.619 [2024-07-22 23:24:33.836124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.619 [2024-07-22 23:24:33.836176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.619 [2024-07-22 23:24:33.836209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.619 [2024-07-22 23:24:33.843273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.619 [2024-07-22 23:24:33.852094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.619 [2024-07-22 23:24:33.852779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.619 [2024-07-22 23:24:33.852847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.619 [2024-07-22 23:24:33.852886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.619 [2024-07-22 23:24:33.853429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.619 [2024-07-22 23:24:33.853865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.619 [2024-07-22 23:24:33.853918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.619 [2024-07-22 23:24:33.853952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.619 [2024-07-22 23:24:33.861118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.619 [2024-07-22 23:24:33.869998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.619 [2024-07-22 23:24:33.870713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.619 [2024-07-22 23:24:33.870783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.619 [2024-07-22 23:24:33.870821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.619 [2024-07-22 23:24:33.871400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.619 [2024-07-22 23:24:33.871809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.619 [2024-07-22 23:24:33.871862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.619 [2024-07-22 23:24:33.871896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.619 [2024-07-22 23:24:33.879040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.619 [2024-07-22 23:24:33.888908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.619 [2024-07-22 23:24:33.889699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.619 [2024-07-22 23:24:33.889768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.619 [2024-07-22 23:24:33.889817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.619 [2024-07-22 23:24:33.890384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.619 [2024-07-22 23:24:33.890813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.619 [2024-07-22 23:24:33.890866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.619 [2024-07-22 23:24:33.890900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.619 [2024-07-22 23:24:33.897943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.619 [2024-07-22 23:24:33.907204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.619 [2024-07-22 23:24:33.907917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.619 [2024-07-22 23:24:33.907985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.619 [2024-07-22 23:24:33.908024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.619 [2024-07-22 23:24:33.908508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.619 [2024-07-22 23:24:33.909003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.619 [2024-07-22 23:24:33.909055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.619 [2024-07-22 23:24:33.909089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.619 [2024-07-22 23:24:33.916281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.881 [2024-07-22 23:24:33.924661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:33.925466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:33.925504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:33.925525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:33.926056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:33.926515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:33.926545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:33.926564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:33.934036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.881 [2024-07-22 23:24:33.942489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:33.943326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:33.943386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:33.943407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:33.943817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:33.944384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:33.944421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:33.944440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:33.952091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.881 [2024-07-22 23:24:33.960495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:33.961332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:33.961388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:33.961409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:33.961811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:33.962382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:33.962435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:33.962468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:33.970636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.881 [2024-07-22 23:24:33.978510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:33.979293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:33.979388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:33.979410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:33.979819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:33.980382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:33.980411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:33.980429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:33.987454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.881 [2024-07-22 23:24:33.996212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:33.996873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:33.996941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:33.996981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:33.997484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:33.997964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:33.998016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:33.998049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:34.005086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.881 [2024-07-22 23:24:34.013790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:34.014587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:34.014657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:34.014695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:34.015232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:34.015670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:34.015725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:34.015759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:34.022558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.881 [2024-07-22 23:24:34.031212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.881 [2024-07-22 23:24:34.031917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.881 [2024-07-22 23:24:34.031987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.881 [2024-07-22 23:24:34.032025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.881 [2024-07-22 23:24:34.032492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.881 [2024-07-22 23:24:34.033001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.881 [2024-07-22 23:24:34.033054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.881 [2024-07-22 23:24:34.033088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.881 [2024-07-22 23:24:34.040187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.882 [2024-07-22 23:24:34.048883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.049703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.049771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.049810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.050374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.050770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.050822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.050856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.057922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.882 [2024-07-22 23:24:34.066762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.067613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.067682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.067722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.068272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.068677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.068731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.068765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.075848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.882 [2024-07-22 23:24:34.084545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.085360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.085399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.085420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.085861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.086417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.086446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.086465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.093501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.882 [2024-07-22 23:24:34.102380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.103114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.103182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.103221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.103617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.104169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.104219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.104252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.111300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.882 [2024-07-22 23:24:34.120144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.120845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.120914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.120953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.121455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.121913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.121966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.122013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.129126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.882 [2024-07-22 23:24:34.137898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.138710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.138779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.138817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.139379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.139763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.139815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.139849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.146922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.882 [2024-07-22 23:24:34.155663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.156461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.156499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.156520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.157042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.157506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.157535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.157554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.164631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:57.882 [2024-07-22 23:24:34.173454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:57.882 [2024-07-22 23:24:34.174184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.882 [2024-07-22 23:24:34.174253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:57.882 [2024-07-22 23:24:34.174292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:57.882 [2024-07-22 23:24:34.174689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:57.882 [2024-07-22 23:24:34.175238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:57.882 [2024-07-22 23:24:34.175289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:57.882 [2024-07-22 23:24:34.175356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:57.882 [2024-07-22 23:24:34.182445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:57.882 [2024-07-22 23:24:34.191050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.191708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.191792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.191833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.192384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.192791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.192844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.192878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.199947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.143 [2024-07-22 23:24:34.208692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.209546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.209583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.209604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.210148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.210581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.210636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.210671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.217744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.143 [2024-07-22 23:24:34.226491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.227288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.227370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.227392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.227767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.228349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.228378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.228396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.235448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.143 [2024-07-22 23:24:34.244242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.244995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.245064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.245103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.245534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.246053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.246106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.246140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.253245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.143 [2024-07-22 23:24:34.262003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.262690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.262759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.262797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.263373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.263742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.263793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.263826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.270410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.143 [2024-07-22 23:24:34.280038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.280766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.280836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.280875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.281410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.281797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.281852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.281885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.288938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.143 [2024-07-22 23:24:34.297645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.298419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.298458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.298480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.298939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.299451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.299481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.299501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.306513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.143 [2024-07-22 23:24:34.315306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.316048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.143 [2024-07-22 23:24:34.316117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.143 [2024-07-22 23:24:34.316157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.143 [2024-07-22 23:24:34.316564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.143 [2024-07-22 23:24:34.317098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.143 [2024-07-22 23:24:34.317151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.143 [2024-07-22 23:24:34.317184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.143 [2024-07-22 23:24:34.324142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.143 [2024-07-22 23:24:34.332063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.143 [2024-07-22 23:24:34.332720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.332773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.332802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.333206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.333577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.333632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.333658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.340111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.144 [2024-07-22 23:24:34.349628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.144 [2024-07-22 23:24:34.350436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.350474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.350496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.351001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.351481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.351510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.351529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.358559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.144 [2024-07-22 23:24:34.367305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.144 [2024-07-22 23:24:34.367976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.368045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.368098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.368528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.369035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.369089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.369122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.376259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.144 [2024-07-22 23:24:34.384937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.144 [2024-07-22 23:24:34.385575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.385613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.385653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.386191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.386605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.386659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.386694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.393692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.144 [2024-07-22 23:24:34.402895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.144 [2024-07-22 23:24:34.403542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.403590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.403611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.404137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.404557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.404587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.404636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.411635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.144 [2024-07-22 23:24:34.420727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.144 [2024-07-22 23:24:34.421473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.421511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.421532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.422006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.422485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.422522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.422542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.429542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.144 [2024-07-22 23:24:34.438382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.144 [2024-07-22 23:24:34.439059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.144 [2024-07-22 23:24:34.439128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.144 [2024-07-22 23:24:34.439167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.144 [2024-07-22 23:24:34.439567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.144 [2024-07-22 23:24:34.440086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.144 [2024-07-22 23:24:34.440140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.144 [2024-07-22 23:24:34.440174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.144 [2024-07-22 23:24:34.447196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.405 [2024-07-22 23:24:34.455459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.456123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.456191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.456230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.456651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.457200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.457251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.457285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.464206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.406 [2024-07-22 23:24:34.473392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.474063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.474131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.474169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.474567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.475103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.475155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.475189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.482225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.406 [2024-07-22 23:24:34.490903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.491559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.491616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.491656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.492193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.492583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.492628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.492664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.499671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.406 [2024-07-22 23:24:34.508443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.509186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.509254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.509293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.509654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.510205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.510256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.510289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.517232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.406 [2024-07-22 23:24:34.526145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.526829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.526898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.526936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.527442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.527872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.527924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.527958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.535038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.406 [2024-07-22 23:24:34.543688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.544550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.544588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.544609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.545173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.545602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.545657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.545690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.552681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.406 [2024-07-22 23:24:34.561458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.562136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.562213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.562252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.562678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.563229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.563281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.563330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.570295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.406 [2024-07-22 23:24:34.579438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.580153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.580222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.580261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.580683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.581234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.581285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.581336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.588371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.406 [2024-07-22 23:24:34.597022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.597755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.597824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.597862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.598404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.598807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.598861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.598907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.605990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.406 [2024-07-22 23:24:34.614723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.406 [2024-07-22 23:24:34.615582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.406 [2024-07-22 23:24:34.615640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.406 [2024-07-22 23:24:34.615680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.406 [2024-07-22 23:24:34.616217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.406 [2024-07-22 23:24:34.616647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.406 [2024-07-22 23:24:34.616702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.406 [2024-07-22 23:24:34.616737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.406 [2024-07-22 23:24:34.623814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.406 [2024-07-22 23:24:34.632567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.407 [2024-07-22 23:24:34.633394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.407 [2024-07-22 23:24:34.633432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.407 [2024-07-22 23:24:34.633454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.407 [2024-07-22 23:24:34.633864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.407 [2024-07-22 23:24:34.634406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.407 [2024-07-22 23:24:34.634435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.407 [2024-07-22 23:24:34.634454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.407 [2024-07-22 23:24:34.641504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.407 [2024-07-22 23:24:34.650359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.407 [2024-07-22 23:24:34.651026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.407 [2024-07-22 23:24:34.651095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.407 [2024-07-22 23:24:34.651133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.407 [2024-07-22 23:24:34.651551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.407 [2024-07-22 23:24:34.652088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.407 [2024-07-22 23:24:34.652140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.407 [2024-07-22 23:24:34.652173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.407 [2024-07-22 23:24:34.659173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.407 [2024-07-22 23:24:34.667889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.407 [2024-07-22 23:24:34.668682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.407 [2024-07-22 23:24:34.668752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.407 [2024-07-22 23:24:34.668790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.407 [2024-07-22 23:24:34.669349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.407 [2024-07-22 23:24:34.669729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.407 [2024-07-22 23:24:34.669782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.407 [2024-07-22 23:24:34.669815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.407 [2024-07-22 23:24:34.676899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.407 [2024-07-22 23:24:34.685562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.407 [2024-07-22 23:24:34.686394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.407 [2024-07-22 23:24:34.686431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.407 [2024-07-22 23:24:34.686452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.407 [2024-07-22 23:24:34.686921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.407 [2024-07-22 23:24:34.687442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.407 [2024-07-22 23:24:34.687471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.407 [2024-07-22 23:24:34.687490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.407 [2024-07-22 23:24:34.694523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.407 [2024-07-22 23:24:34.703381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.407 [2024-07-22 23:24:34.704064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.407 [2024-07-22 23:24:34.704132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.407 [2024-07-22 23:24:34.704171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.407 [2024-07-22 23:24:34.704569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.407 [2024-07-22 23:24:34.705128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.407 [2024-07-22 23:24:34.705179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.407 [2024-07-22 23:24:34.705212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.407 [2024-07-22 23:24:34.711994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.669 [2024-07-22 23:24:34.720742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.669 [2024-07-22 23:24:34.721386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.669 [2024-07-22 23:24:34.721423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.669 [2024-07-22 23:24:34.721445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.669 [2024-07-22 23:24:34.721922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.669 [2024-07-22 23:24:34.722443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.669 [2024-07-22 23:24:34.722473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.669 [2024-07-22 23:24:34.722491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.669 [2024-07-22 23:24:34.729954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.669 [2024-07-22 23:24:34.738360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.669 [2024-07-22 23:24:34.739028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.669 [2024-07-22 23:24:34.739097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.669 [2024-07-22 23:24:34.739136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.669 [2024-07-22 23:24:34.739551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.669 [2024-07-22 23:24:34.740114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.669 [2024-07-22 23:24:34.740166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.669 [2024-07-22 23:24:34.740199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.669 [2024-07-22 23:24:34.747321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.669 [2024-07-22 23:24:34.756005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.669 [2024-07-22 23:24:34.756740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.669 [2024-07-22 23:24:34.756809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.669 [2024-07-22 23:24:34.756847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.669 [2024-07-22 23:24:34.757394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.669 [2024-07-22 23:24:34.757832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.669 [2024-07-22 23:24:34.757886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.669 [2024-07-22 23:24:34.757920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.669 [2024-07-22 23:24:34.764994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.669 [2024-07-22 23:24:34.773731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.669 [2024-07-22 23:24:34.774591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.669 [2024-07-22 23:24:34.774659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.669 [2024-07-22 23:24:34.774697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.669 [2024-07-22 23:24:34.775233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.669 [2024-07-22 23:24:34.775644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.669 [2024-07-22 23:24:34.775698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.669 [2024-07-22 23:24:34.775744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.669 [2024-07-22 23:24:34.782549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.669 [2024-07-22 23:24:34.791276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.669 [2024-07-22 23:24:34.791947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.669 [2024-07-22 23:24:34.792015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.669 [2024-07-22 23:24:34.792055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.669 [2024-07-22 23:24:34.792507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.669 [2024-07-22 23:24:34.793001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.669 [2024-07-22 23:24:34.793053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.669 [2024-07-22 23:24:34.793086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.800141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.670 [2024-07-22 23:24:34.808799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.809610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.809694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.809733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.810269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.810678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.810731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.810764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.817824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.670 [2024-07-22 23:24:34.826571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.827417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.827455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.827476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.827946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.828454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.828484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.828503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.835557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.670 [2024-07-22 23:24:34.844415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.845125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.845203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.845244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.845608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.846164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.846216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.846250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.853258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.670 [2024-07-22 23:24:34.861935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.862736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.862806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.862845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.863393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.863775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.863827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.863860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.870940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.670 [2024-07-22 23:24:34.879704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.880537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.880574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.880614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.881152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.881565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.881595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.881632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.888686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.670 [2024-07-22 23:24:34.897477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.898238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.898306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.898383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.898756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.899352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.899381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.899399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.906443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.670 [2024-07-22 23:24:34.915275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.915946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.916013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.916052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.916522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.917032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.917086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.917119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.924144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.670 [2024-07-22 23:24:34.932864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.933624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.933692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.933731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.934269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.934703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.934758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.934791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.941821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.670 [2024-07-22 23:24:34.950529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.951360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.951398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.951419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.951860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.952424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.952453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.952472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.959485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.670 [2024-07-22 23:24:34.968302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.670 [2024-07-22 23:24:34.968957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.670 [2024-07-22 23:24:34.969025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.670 [2024-07-22 23:24:34.969064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.670 [2024-07-22 23:24:34.969509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.670 [2024-07-22 23:24:34.970018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.670 [2024-07-22 23:24:34.970071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.670 [2024-07-22 23:24:34.970105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.670 [2024-07-22 23:24:34.976942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.931 [2024-07-22 23:24:34.986231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.931 [2024-07-22 23:24:34.986871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.931 [2024-07-22 23:24:34.986940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.931 [2024-07-22 23:24:34.986979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.931 [2024-07-22 23:24:34.987463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.931 [2024-07-22 23:24:34.987949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.931 [2024-07-22 23:24:34.988003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.931 [2024-07-22 23:24:34.988037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.931 [2024-07-22 23:24:34.995054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.931 [2024-07-22 23:24:35.003777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.931 [2024-07-22 23:24:35.004608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.931 [2024-07-22 23:24:35.004677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.931 [2024-07-22 23:24:35.004715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.931 [2024-07-22 23:24:35.005251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.931 [2024-07-22 23:24:35.005657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.931 [2024-07-22 23:24:35.005712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.931 [2024-07-22 23:24:35.005746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.931 [2024-07-22 23:24:35.012833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.931 [2024-07-22 23:24:35.021650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.931 [2024-07-22 23:24:35.022438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.931 [2024-07-22 23:24:35.022476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.931 [2024-07-22 23:24:35.022505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.931 [2024-07-22 23:24:35.023018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.931 [2024-07-22 23:24:35.023492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.931 [2024-07-22 23:24:35.023522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.931 [2024-07-22 23:24:35.023540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.931 [2024-07-22 23:24:35.030574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.931 [2024-07-22 23:24:35.038714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.931 [2024-07-22 23:24:35.039406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.931 [2024-07-22 23:24:35.039445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.931 [2024-07-22 23:24:35.039467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.931 [2024-07-22 23:24:35.039908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.931 [2024-07-22 23:24:35.040429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.931 [2024-07-22 23:24:35.040459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.931 [2024-07-22 23:24:35.040477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.931 [2024-07-22 23:24:35.047403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.931 [2024-07-22 23:24:35.056507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.931 [2024-07-22 23:24:35.057265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.931 [2024-07-22 23:24:35.057360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.931 [2024-07-22 23:24:35.057385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.057764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.058327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.932 [2024-07-22 23:24:35.058376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.932 [2024-07-22 23:24:35.058395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.932 [2024-07-22 23:24:35.065439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.932 [2024-07-22 23:24:35.074168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.932 [2024-07-22 23:24:35.074826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.932 [2024-07-22 23:24:35.074896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.932 [2024-07-22 23:24:35.074936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.075438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.075876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.932 [2024-07-22 23:24:35.075942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.932 [2024-07-22 23:24:35.075978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.932 [2024-07-22 23:24:35.083023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.932 [2024-07-22 23:24:35.091709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.932 [2024-07-22 23:24:35.092503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.932 [2024-07-22 23:24:35.092541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.932 [2024-07-22 23:24:35.092563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.093071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.093520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.932 [2024-07-22 23:24:35.093550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.932 [2024-07-22 23:24:35.093568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.932 [2024-07-22 23:24:35.100659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.932 [2024-07-22 23:24:35.109496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.932 [2024-07-22 23:24:35.110326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.932 [2024-07-22 23:24:35.110384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.932 [2024-07-22 23:24:35.110405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.110792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.111360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.932 [2024-07-22 23:24:35.111389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.932 [2024-07-22 23:24:35.111408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.932 [2024-07-22 23:24:35.118454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.932 [2024-07-22 23:24:35.127283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.932 [2024-07-22 23:24:35.127974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.932 [2024-07-22 23:24:35.128042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.932 [2024-07-22 23:24:35.128080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.128517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.129010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.932 [2024-07-22 23:24:35.129062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.932 [2024-07-22 23:24:35.129096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.932 [2024-07-22 23:24:35.136157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.932 [2024-07-22 23:24:35.144862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.932 [2024-07-22 23:24:35.145659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.932 [2024-07-22 23:24:35.145728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.932 [2024-07-22 23:24:35.145766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.146303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.146707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.932 [2024-07-22 23:24:35.146761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.932 [2024-07-22 23:24:35.146795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.932 [2024-07-22 23:24:35.153922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.932 [2024-07-22 23:24:35.162682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.932 [2024-07-22 23:24:35.163524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.932 [2024-07-22 23:24:35.163583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.932 [2024-07-22 23:24:35.163623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.932 [2024-07-22 23:24:35.164161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.932 [2024-07-22 23:24:35.164571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.933 [2024-07-22 23:24:35.164601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.933 [2024-07-22 23:24:35.164620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.933 [2024-07-22 23:24:35.171723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.933 [2024-07-22 23:24:35.180494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.933 [2024-07-22 23:24:35.181332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.933 [2024-07-22 23:24:35.181389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.933 [2024-07-22 23:24:35.181411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.933 [2024-07-22 23:24:35.181807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.933 [2024-07-22 23:24:35.182376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.933 [2024-07-22 23:24:35.182405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.933 [2024-07-22 23:24:35.182424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.933 [2024-07-22 23:24:35.189437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.933 [2024-07-22 23:24:35.198175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.933 [2024-07-22 23:24:35.198845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.933 [2024-07-22 23:24:35.198914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.933 [2024-07-22 23:24:35.198953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.933 [2024-07-22 23:24:35.199460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.933 [2024-07-22 23:24:35.199942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.933 [2024-07-22 23:24:35.199995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.933 [2024-07-22 23:24:35.200028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.933 [2024-07-22 23:24:35.207104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:58.933 [2024-07-22 23:24:35.215857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.933 [2024-07-22 23:24:35.216655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.933 [2024-07-22 23:24:35.216725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.933 [2024-07-22 23:24:35.216764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.933 [2024-07-22 23:24:35.217300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.933 [2024-07-22 23:24:35.217717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.933 [2024-07-22 23:24:35.217772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.933 [2024-07-22 23:24:35.217805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:58.933 [2024-07-22 23:24:35.224856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:58.933 [2024-07-22 23:24:35.233584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:58.933 [2024-07-22 23:24:35.234416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:58.933 [2024-07-22 23:24:35.234454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:58.933 [2024-07-22 23:24:35.234474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:58.933 [2024-07-22 23:24:35.234962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:58.933 [2024-07-22 23:24:35.235464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:58.933 [2024-07-22 23:24:35.235494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:58.933 [2024-07-22 23:24:35.235512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.195 [2024-07-22 23:24:35.242093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.195 [2024-07-22 23:24:35.251232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.195 [2024-07-22 23:24:35.251924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.195 [2024-07-22 23:24:35.251993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.195 [2024-07-22 23:24:35.252032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.195 [2024-07-22 23:24:35.252494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.195 [2024-07-22 23:24:35.253020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.195 [2024-07-22 23:24:35.253072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.195 [2024-07-22 23:24:35.253118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.195 [2024-07-22 23:24:35.260163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.195 [2024-07-22 23:24:35.268792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.195 [2024-07-22 23:24:35.269601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.195 [2024-07-22 23:24:35.269678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.195 [2024-07-22 23:24:35.269717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.195 [2024-07-22 23:24:35.270254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.195 [2024-07-22 23:24:35.270686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.195 [2024-07-22 23:24:35.270740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.195 [2024-07-22 23:24:35.270773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.195 [2024-07-22 23:24:35.277787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.195 [2024-07-22 23:24:35.286895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.195 [2024-07-22 23:24:35.287685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.195 [2024-07-22 23:24:35.287754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.195 [2024-07-22 23:24:35.287793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.195 [2024-07-22 23:24:35.288351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.195 [2024-07-22 23:24:35.288759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.195 [2024-07-22 23:24:35.288812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.288845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.295371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.196 [2024-07-22 23:24:35.304528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.305339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.305395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.305416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.305849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.306397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.196 [2024-07-22 23:24:35.306426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.306444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.313469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.196 [2024-07-22 23:24:35.322249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.322946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.323014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.323051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.323508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.323992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.196 [2024-07-22 23:24:35.324045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.324078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.331076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.196 [2024-07-22 23:24:35.339766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.340565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.340638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.340677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.341215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.341662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.196 [2024-07-22 23:24:35.341718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.341751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.348732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.196 [2024-07-22 23:24:35.357479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.358192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.358261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.358298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.358687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.359236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.196 [2024-07-22 23:24:35.359287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.359352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.366671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.196 [2024-07-22 23:24:35.376295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.377119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.377189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.377227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.377654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.378216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.196 [2024-07-22 23:24:35.378268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.378300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.385448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.196 [2024-07-22 23:24:35.394359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.395051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.395119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.395157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.395561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.396039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.196 [2024-07-22 23:24:35.396091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.196 [2024-07-22 23:24:35.396125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.196 [2024-07-22 23:24:35.403589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.196 [2024-07-22 23:24:35.413258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.196 [2024-07-22 23:24:35.414106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.196 [2024-07-22 23:24:35.414175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.196 [2024-07-22 23:24:35.414214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.196 [2024-07-22 23:24:35.414639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.196 [2024-07-22 23:24:35.415189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.197 [2024-07-22 23:24:35.415241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.197 [2024-07-22 23:24:35.415273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.197 [2024-07-22 23:24:35.422440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.197 [2024-07-22 23:24:35.431119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.197 [2024-07-22 23:24:35.431794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.197 [2024-07-22 23:24:35.431865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.197 [2024-07-22 23:24:35.431904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.197 [2024-07-22 23:24:35.432445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.197 [2024-07-22 23:24:35.432893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.197 [2024-07-22 23:24:35.432946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.197 [2024-07-22 23:24:35.432979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.197 [2024-07-22 23:24:35.440173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.197 [2024-07-22 23:24:35.449390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.197 [2024-07-22 23:24:35.450232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.197 [2024-07-22 23:24:35.450300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.197 [2024-07-22 23:24:35.450362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.197 [2024-07-22 23:24:35.450901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.197 [2024-07-22 23:24:35.451469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.197 [2024-07-22 23:24:35.451523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.197 [2024-07-22 23:24:35.451556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.197 [2024-07-22 23:24:35.459707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.197 [2024-07-22 23:24:35.468381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.197 [2024-07-22 23:24:35.469206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.197 [2024-07-22 23:24:35.469273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.197 [2024-07-22 23:24:35.469330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.197 [2024-07-22 23:24:35.469733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.197 [2024-07-22 23:24:35.470281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.197 [2024-07-22 23:24:35.470351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.197 [2024-07-22 23:24:35.470396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.197 [2024-07-22 23:24:35.477462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.197 [2024-07-22 23:24:35.487385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.197 [2024-07-22 23:24:35.488195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.197 [2024-07-22 23:24:35.488264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.197 [2024-07-22 23:24:35.488302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.197 [2024-07-22 23:24:35.488725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.197 [2024-07-22 23:24:35.489273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.197 [2024-07-22 23:24:35.489340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.197 [2024-07-22 23:24:35.489377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.197 [2024-07-22 23:24:35.496468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.197 [2024-07-22 23:24:35.504372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.197 [2024-07-22 23:24:35.504876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.197 [2024-07-22 23:24:35.504920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.197 [2024-07-22 23:24:35.504942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.197 [2024-07-22 23:24:35.505447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.457 [2024-07-22 23:24:35.505909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.457 [2024-07-22 23:24:35.505964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.457 [2024-07-22 23:24:35.505997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.457 [2024-07-22 23:24:35.511488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.457 [2024-07-22 23:24:35.520058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.457 [2024-07-22 23:24:35.520778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.457 [2024-07-22 23:24:35.520848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.457 [2024-07-22 23:24:35.520887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.457 [2024-07-22 23:24:35.521420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.457 [2024-07-22 23:24:35.521861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.457 [2024-07-22 23:24:35.521915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.457 [2024-07-22 23:24:35.521948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.457 [2024-07-22 23:24:35.529130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.457 [2024-07-22 23:24:35.538006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.457 [2024-07-22 23:24:35.538697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.457 [2024-07-22 23:24:35.538767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.457 [2024-07-22 23:24:35.538807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.457 [2024-07-22 23:24:35.539367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.457 [2024-07-22 23:24:35.539776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.457 [2024-07-22 23:24:35.539839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.457 [2024-07-22 23:24:35.539873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.457 [2024-07-22 23:24:35.546454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.457 [2024-07-22 23:24:35.555729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.457 [2024-07-22 23:24:35.556508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.457 [2024-07-22 23:24:35.556546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.457 [2024-07-22 23:24:35.556568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.457 [2024-07-22 23:24:35.557055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.457 [2024-07-22 23:24:35.557534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.457 [2024-07-22 23:24:35.557580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.457 [2024-07-22 23:24:35.557608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.457 [2024-07-22 23:24:35.564539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.457 [2024-07-22 23:24:35.573244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.457 [2024-07-22 23:24:35.573810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.457 [2024-07-22 23:24:35.573850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.457 [2024-07-22 23:24:35.573873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.457 [2024-07-22 23:24:35.574168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.457 [2024-07-22 23:24:35.574481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.457 [2024-07-22 23:24:35.574511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.457 [2024-07-22 23:24:35.574530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.457 [2024-07-22 23:24:35.578956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.457 [2024-07-22 23:24:35.591290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.457 [2024-07-22 23:24:35.591862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.457 [2024-07-22 23:24:35.591901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.457 [2024-07-22 23:24:35.591924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.457 [2024-07-22 23:24:35.592219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.457 [2024-07-22 23:24:35.592534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.457 [2024-07-22 23:24:35.592565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.457 [2024-07-22 23:24:35.592612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.457 [2024-07-22 23:24:35.599810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.457 [2024-07-22 23:24:35.609335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.457 [2024-07-22 23:24:35.610014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.610084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.610124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.610556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.611072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.611126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.611160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.618424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.458 [2024-07-22 23:24:35.627145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.627807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.627877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.627916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.628457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.628869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.628923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.628958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.635842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.458 [2024-07-22 23:24:35.644917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.645607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.645675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.645715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.646252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.646692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.646748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.646782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.653501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.458 [2024-07-22 23:24:35.662090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.662669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.662707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.662729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.663022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.663333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.663363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.663381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.670409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.458 [2024-07-22 23:24:35.679808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.680581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.680639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.680691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.681232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.681610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.681641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.681660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.688467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.458 [2024-07-22 23:24:35.697594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.698377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.698416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.698438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.698817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.699384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.699415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.699434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.706440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.458 [2024-07-22 23:24:35.715422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.716071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.716140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.716179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.716595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.717141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.717195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.717230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.723798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.458 [2024-07-22 23:24:35.732120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.732787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.732858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.732899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.733435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.733900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.733967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.734004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.740940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.458 [2024-07-22 23:24:35.749789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.458 [2024-07-22 23:24:35.750658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.458 [2024-07-22 23:24:35.750727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.458 [2024-07-22 23:24:35.750766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.458 [2024-07-22 23:24:35.751303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.458 [2024-07-22 23:24:35.751759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.458 [2024-07-22 23:24:35.751813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.458 [2024-07-22 23:24:35.751847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.458 [2024-07-22 23:24:35.759186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.458 [2024-07-22 23:24:35.767389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.768061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.768132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.768173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.768575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.768985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.769040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.769074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.776003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.719 [2024-07-22 23:24:35.786099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.786790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.786859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.786899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.787445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.787886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.787940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.787975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.794538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.719 [2024-07-22 23:24:35.803624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.804511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.804550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.804601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.805140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.805562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.805594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.805613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.812669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.719 [2024-07-22 23:24:35.822063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.822744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.822814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.822853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.823413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.823835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.823889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.823923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.831075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.719 [2024-07-22 23:24:35.839754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.840499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.840537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.840559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.841069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.841521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.841552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.841571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.848573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.719 [2024-07-22 23:24:35.857746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.858594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.858661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.858700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.859250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.859818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.859874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.859909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.868061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.719 [2024-07-22 23:24:35.875610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.876438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.876476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.876498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.877006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.877486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.877516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.877535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.884971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.719 [2024-07-22 23:24:35.893432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.894191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.894260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.719 [2024-07-22 23:24:35.894299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.719 [2024-07-22 23:24:35.894657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.719 [2024-07-22 23:24:35.895195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.719 [2024-07-22 23:24:35.895250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.719 [2024-07-22 23:24:35.895283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.719 [2024-07-22 23:24:35.902703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.719 [2024-07-22 23:24:35.912371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.719 [2024-07-22 23:24:35.913229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.719 [2024-07-22 23:24:35.913298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:35.913361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:35.913764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:35.914331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:35.914393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:35.914420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.720 [2024-07-22 23:24:35.921507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.720 [2024-07-22 23:24:35.930395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.720 [2024-07-22 23:24:35.931071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.720 [2024-07-22 23:24:35.931142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:35.931182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:35.931580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:35.932120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:35.932175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:35.932209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.720 [2024-07-22 23:24:35.939648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.720 [2024-07-22 23:24:35.949276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.720 [2024-07-22 23:24:35.950129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.720 [2024-07-22 23:24:35.950197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:35.950237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:35.950666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:35.951216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:35.951269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:35.951303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.720 [2024-07-22 23:24:35.958419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.720 [2024-07-22 23:24:35.967589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.720 [2024-07-22 23:24:35.968388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.720 [2024-07-22 23:24:35.968459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:35.968500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:35.969038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:35.969526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:35.969557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:35.969576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.720 [2024-07-22 23:24:35.976757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.720 [2024-07-22 23:24:35.985933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.720 [2024-07-22 23:24:35.986788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.720 [2024-07-22 23:24:35.986858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:35.986896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:35.987438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:35.987879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:35.987934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:35.987968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.720 [2024-07-22 23:24:35.995193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.720 [2024-07-22 23:24:36.004253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.720 [2024-07-22 23:24:36.005120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.720 [2024-07-22 23:24:36.005190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:36.005230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:36.005658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:36.006208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:36.006262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:36.006296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.720 [2024-07-22 23:24:36.013442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.720 [2024-07-22 23:24:36.022128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.720 [2024-07-22 23:24:36.022842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.720 [2024-07-22 23:24:36.022911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.720 [2024-07-22 23:24:36.022949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.720 [2024-07-22 23:24:36.023469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.720 [2024-07-22 23:24:36.023837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.720 [2024-07-22 23:24:36.023891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.720 [2024-07-22 23:24:36.023926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.030267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.982 [2024-07-22 23:24:36.040062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.040781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.040851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.040891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.041424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.041897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.041953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.041986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.048826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.982 [2024-07-22 23:24:36.057950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.058663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.058733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.058773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.059335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.059722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.059777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.059811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.066877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.982 [2024-07-22 23:24:36.075578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.076419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.076458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.076480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.076954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.077459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.077490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.077509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.084805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.982 [2024-07-22 23:24:36.094451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.095286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.095383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.095407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.095810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.096391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.096422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.096441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.103536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.982 [2024-07-22 23:24:36.112751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.113631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.113700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.113739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.114278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.114848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.114901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.114936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.123095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.982 [2024-07-22 23:24:36.130646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.131431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.131469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.131491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.131994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.132481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.132512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.132530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.139942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.982 [2024-07-22 23:24:36.149588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.150413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.150483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.150522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.151059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.151530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.151562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.151581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.158769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.982 [2024-07-22 23:24:36.167541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.982 [2024-07-22 23:24:36.168354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.982 [2024-07-22 23:24:36.168392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.982 [2024-07-22 23:24:36.168421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.982 [2024-07-22 23:24:36.168838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.982 [2024-07-22 23:24:36.169394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.982 [2024-07-22 23:24:36.169425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.982 [2024-07-22 23:24:36.169444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.982 [2024-07-22 23:24:36.176558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.983 [2024-07-22 23:24:36.185439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.983 [2024-07-22 23:24:36.186153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.983 [2024-07-22 23:24:36.186221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.983 [2024-07-22 23:24:36.186261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.983 [2024-07-22 23:24:36.186665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.983 [2024-07-22 23:24:36.187216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.983 [2024-07-22 23:24:36.187268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.983 [2024-07-22 23:24:36.187303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.983 [2024-07-22 23:24:36.194375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.983 [2024-07-22 23:24:36.203070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.983 [2024-07-22 23:24:36.203772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.983 [2024-07-22 23:24:36.203843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.983 [2024-07-22 23:24:36.203883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.983 [2024-07-22 23:24:36.204414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.983 [2024-07-22 23:24:36.204834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.983 [2024-07-22 23:24:36.204889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.983 [2024-07-22 23:24:36.204924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.983 [2024-07-22 23:24:36.212028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.983 [2024-07-22 23:24:36.220756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.983 [2024-07-22 23:24:36.221579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.983 [2024-07-22 23:24:36.221640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.983 [2024-07-22 23:24:36.221681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.983 [2024-07-22 23:24:36.222220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.983 [2024-07-22 23:24:36.222658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.983 [2024-07-22 23:24:36.222715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.983 [2024-07-22 23:24:36.222750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.983 [2024-07-22 23:24:36.229837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.983 [2024-07-22 23:24:36.238580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.983 [2024-07-22 23:24:36.239386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.983 [2024-07-22 23:24:36.239425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.983 [2024-07-22 23:24:36.239447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.983 [2024-07-22 23:24:36.239861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.983 [2024-07-22 23:24:36.240407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.983 [2024-07-22 23:24:36.240437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.983 [2024-07-22 23:24:36.240456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.983 [2024-07-22 23:24:36.247532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.983 [2024-07-22 23:24:36.256400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.983 [2024-07-22 23:24:36.257106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.983 [2024-07-22 23:24:36.257174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.983 [2024-07-22 23:24:36.257213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.983 [2024-07-22 23:24:36.257613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.983 [2024-07-22 23:24:36.258157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.983 [2024-07-22 23:24:36.258212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.983 [2024-07-22 23:24:36.258245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.983 [2024-07-22 23:24:36.265292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:59.983 [2024-07-22 23:24:36.274090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.983 [2024-07-22 23:24:36.274830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.983 [2024-07-22 23:24:36.274900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:43:59.983 [2024-07-22 23:24:36.274939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:43:59.983 [2024-07-22 23:24:36.275445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:43:59.983 [2024-07-22 23:24:36.275903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.983 [2024-07-22 23:24:36.275956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.983 [2024-07-22 23:24:36.275990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.983 [2024-07-22 23:24:36.283065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:59.983 [2024-07-22 23:24:36.291733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.292541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.292581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.292603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.292945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.293453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.293484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.293502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.299896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.245 [2024-07-22 23:24:36.309535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.310302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.310383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.310406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.310837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.311396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.311426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.311445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.318508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.245 [2024-07-22 23:24:36.327278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.327951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.328020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.328059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.328508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.329018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.329073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.329107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.336120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.245 [2024-07-22 23:24:36.344857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.345607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.345680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.345732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.346273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.346675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.346729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.346765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.353834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.245 [2024-07-22 23:24:36.362496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.363280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.363372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.363396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.363762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.364343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.364373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.364392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.371430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.245 [2024-07-22 23:24:36.380086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.380780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.380850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.380889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.381419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.381792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.381846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.381879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.388901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.245 [2024-07-22 23:24:36.397569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.398441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.398513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.398553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.399091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.399548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.399586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.399630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.406826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.245 [2024-07-22 23:24:36.415191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.415878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.415948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.415987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.416475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.416921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.416975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.417010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.245 [2024-07-22 23:24:36.424095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.245 [2024-07-22 23:24:36.432816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.245 [2024-07-22 23:24:36.433583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.245 [2024-07-22 23:24:36.433642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.245 [2024-07-22 23:24:36.433684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.245 [2024-07-22 23:24:36.434222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.245 [2024-07-22 23:24:36.434630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.245 [2024-07-22 23:24:36.434686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.245 [2024-07-22 23:24:36.434720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.441858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.246 [2024-07-22 23:24:36.450814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.246 [2024-07-22 23:24:36.451665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.246 [2024-07-22 23:24:36.451733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.246 [2024-07-22 23:24:36.451772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.246 [2024-07-22 23:24:36.452332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.246 [2024-07-22 23:24:36.452880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.246 [2024-07-22 23:24:36.452932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.246 [2024-07-22 23:24:36.452966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.461123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.246 [2024-07-22 23:24:36.468695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.246 [2024-07-22 23:24:36.469549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.246 [2024-07-22 23:24:36.469628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.246 [2024-07-22 23:24:36.469668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.246 [2024-07-22 23:24:36.470206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.246 [2024-07-22 23:24:36.470596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.246 [2024-07-22 23:24:36.470628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.246 [2024-07-22 23:24:36.470647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.478096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.246 [2024-07-22 23:24:36.486511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.246 [2024-07-22 23:24:36.487286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.246 [2024-07-22 23:24:36.487372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.246 [2024-07-22 23:24:36.487395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.246 [2024-07-22 23:24:36.487792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.246 [2024-07-22 23:24:36.488367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.246 [2024-07-22 23:24:36.488398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.246 [2024-07-22 23:24:36.488417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.495935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.246 [2024-07-22 23:24:36.505401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.246 [2024-07-22 23:24:36.506238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.246 [2024-07-22 23:24:36.506307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.246 [2024-07-22 23:24:36.506369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.246 [2024-07-22 23:24:36.506773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.246 [2024-07-22 23:24:36.507342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.246 [2024-07-22 23:24:36.507401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.246 [2024-07-22 23:24:36.507420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.514492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.246 [2024-07-22 23:24:36.523693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.246 [2024-07-22 23:24:36.524509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.246 [2024-07-22 23:24:36.524578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.246 [2024-07-22 23:24:36.524619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.246 [2024-07-22 23:24:36.525169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.246 [2024-07-22 23:24:36.525593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.246 [2024-07-22 23:24:36.525653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.246 [2024-07-22 23:24:36.525691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.532874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.246 [2024-07-22 23:24:36.541987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.246 [2024-07-22 23:24:36.542727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.246 [2024-07-22 23:24:36.542797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.246 [2024-07-22 23:24:36.542837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.246 [2024-07-22 23:24:36.543401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.246 [2024-07-22 23:24:36.543796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.246 [2024-07-22 23:24:36.543850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.246 [2024-07-22 23:24:36.543884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.246 [2024-07-22 23:24:36.550718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.506 [2024-07-22 23:24:36.559781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.506 [2024-07-22 23:24:36.560612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.506 [2024-07-22 23:24:36.560684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.506 [2024-07-22 23:24:36.560724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.506 [2024-07-22 23:24:36.561263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.506 [2024-07-22 23:24:36.561674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.506 [2024-07-22 23:24:36.561732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.506 [2024-07-22 23:24:36.561767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.506 [2024-07-22 23:24:36.568812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.506 [2024-07-22 23:24:36.577599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.506 [2024-07-22 23:24:36.578378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.506 [2024-07-22 23:24:36.578416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.506 [2024-07-22 23:24:36.578438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.506 [2024-07-22 23:24:36.578893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.506 [2024-07-22 23:24:36.579431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.506 [2024-07-22 23:24:36.579462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.506 [2024-07-22 23:24:36.579488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.506 [2024-07-22 23:24:36.587461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.506 [2024-07-22 23:24:36.596618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.506 [2024-07-22 23:24:36.597403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.506 [2024-07-22 23:24:36.597473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.506 [2024-07-22 23:24:36.597512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.506 [2024-07-22 23:24:36.598050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.506 [2024-07-22 23:24:36.598623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.506 [2024-07-22 23:24:36.598677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.598711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.606696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.507 [2024-07-22 23:24:36.615894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.616763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.616834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.616872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.617426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.617854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.617908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.617942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.625115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.507 [2024-07-22 23:24:36.634155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.634895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.634966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.635005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.635498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.635987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.636041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.636074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.643261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.507 [2024-07-22 23:24:36.652552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.653383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.653463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.653504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.654043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.654522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.654554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.654572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.661725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.507 [2024-07-22 23:24:36.670864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.671719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.671788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.671828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.672401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.672811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.672865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.672899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.680083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1066314 Killed "${NVMF_APP[@]}" "$@" 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:44:00.507 [2024-07-22 23:24:36.688612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:00.507 [2024-07-22 23:24:36.689320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.689379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.689402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:00.507 [2024-07-22 23:24:36.689755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.690238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.690292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.690354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1067173 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1067173 00:44:00.507 [2024-07-22 23:24:36.697442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1067173 ']' 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:00.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
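This is the point where the log explains the storm of refused connections: bdevperf.sh reports that the previous target process (pid 1066314, started as "${NVMF_APP[@]}") has been killed, and tgt_init starts a fresh nvmf_tgt (pid 1067173) inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xE, then waits for it to listen on /var/tmp/spdk.sock. Until the new instance is up and reconfigured, every reconnect attempt from the initiator side keeps failing with ECONNREFUSED. A rough, simplified illustration of that kill/restart/wait pattern (the real helpers in nvmf/common.sh and autotest_common.sh also handle namespaces, xtrace bookkeeping and retry limits):

    # Simplified sketch only; path and flags copied from the traced invocation above.
    restart_target() {
        local old_pid=$1 rpc_sock=/var/tmp/spdk.sock new_pid

        kill -9 "$old_pid" 2>/dev/null || true   # old target goes away; initiators start seeing ECONNREFUSED

        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
        new_pid=$!

        # rough equivalent of waitforlisten: block until the RPC UNIX socket
        # exists and the new process is still alive
        while [ ! -S "$rpc_sock" ]; do
            kill -0 "$new_pid" 2>/dev/null || return 1
            sleep 0.2
        done
        echo "$new_pid"
    }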
00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:00.507 23:24:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:00.507 [2024-07-22 23:24:36.706582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.707383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.707423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.707446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.707908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.708442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.708472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.708492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.715553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.507 [2024-07-22 23:24:36.724446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.725118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.725188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.725228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.725646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.726197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.726251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.726284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.733230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.507 [2024-07-22 23:24:36.741631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.742420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.742458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.742479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.507 [2024-07-22 23:24:36.742931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.507 [2024-07-22 23:24:36.743455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.507 [2024-07-22 23:24:36.743486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.507 [2024-07-22 23:24:36.743506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.507 [2024-07-22 23:24:36.750263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.507 [2024-07-22 23:24:36.759448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.507 [2024-07-22 23:24:36.760121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.507 [2024-07-22 23:24:36.760190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.507 [2024-07-22 23:24:36.760230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.508 [2024-07-22 23:24:36.760607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.508 [2024-07-22 23:24:36.761162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.508 [2024-07-22 23:24:36.761215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.508 [2024-07-22 23:24:36.761249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.508 [2024-07-22 23:24:36.766921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.508 [2024-07-22 23:24:36.777486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.508 [2024-07-22 23:24:36.778240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.508 [2024-07-22 23:24:36.778327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.508 [2024-07-22 23:24:36.778380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.508 [2024-07-22 23:24:36.778785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.508 [2024-07-22 23:24:36.779365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.508 [2024-07-22 23:24:36.779395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.508 [2024-07-22 23:24:36.779414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.508 [2024-07-22 23:24:36.786501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.508 [2024-07-22 23:24:36.794790] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:44:00.508 [2024-07-22 23:24:36.794957] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:00.508 [2024-07-22 23:24:36.795218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.508 [2024-07-22 23:24:36.795846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.508 [2024-07-22 23:24:36.795913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.508 [2024-07-22 23:24:36.795952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.508 [2024-07-22 23:24:36.796467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.508 [2024-07-22 23:24:36.796933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.508 [2024-07-22 23:24:36.796992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.508 [2024-07-22 23:24:36.797041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.508 [2024-07-22 23:24:36.803468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
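The DPDK EAL parameter line shows the new target being brought up with core mask 0xE (bits 1-3 set), which is why app.c later reports three available cores and starts reactors on cores 1, 2 and 3. A small standalone helper, not taken from the repo, for decoding such a mask:

    # List the CPU indices selected by a DPDK/SPDK core mask such as -m 0xE / -c 0xE.
    decode_coremask() {
        local mask=$(( $1 )) i     # accepts 0xE, 14, ...
        for ((i = 0; i < 64; i++)); do
            (( (mask >> i) & 1 )) && printf 'core %d\n' "$i"
        done
    }

    decode_coremask 0xE    # prints: core 1, core 2, core 3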
00:44:00.508 [2024-07-22 23:24:36.812462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.508 [2024-07-22 23:24:36.812973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.508 [2024-07-22 23:24:36.813042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.508 [2024-07-22 23:24:36.813095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.508 [2024-07-22 23:24:36.813402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.508 [2024-07-22 23:24:36.813704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.508 [2024-07-22 23:24:36.813732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.508 [2024-07-22 23:24:36.813751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.768 [2024-07-22 23:24:36.819691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.768 [2024-07-22 23:24:36.829575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.768 [2024-07-22 23:24:36.830169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.768 [2024-07-22 23:24:36.830237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.768 [2024-07-22 23:24:36.830288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.768 [2024-07-22 23:24:36.830594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.768 [2024-07-22 23:24:36.830905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.768 [2024-07-22 23:24:36.830956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.768 [2024-07-22 23:24:36.830991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.768 [2024-07-22 23:24:36.838136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.768 [2024-07-22 23:24:36.847465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.768 [2024-07-22 23:24:36.848018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.768 [2024-07-22 23:24:36.848087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.768 [2024-07-22 23:24:36.848127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.768 [2024-07-22 23:24:36.848453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.768 [2024-07-22 23:24:36.848752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.768 [2024-07-22 23:24:36.848780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.768 [2024-07-22 23:24:36.848798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.768 [2024-07-22 23:24:36.855153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.768 [2024-07-22 23:24:36.865235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.768 [2024-07-22 23:24:36.865881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.768 [2024-07-22 23:24:36.865950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.768 [2024-07-22 23:24:36.866004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.768 [2024-07-22 23:24:36.866399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.768 [2024-07-22 23:24:36.866709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.768 [2024-07-22 23:24:36.866774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.768 [2024-07-22 23:24:36.866808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 EAL: No free 2048 kB hugepages reported on node 1 00:44:00.769 [2024-07-22 23:24:36.873305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
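The "No free 2048 kB hugepages reported on node 1" line is EAL noting that NUMA node 1 currently has no free 2 MB hugepages; startup continues here, but when EAL initialization fails outright this is the first thing to check. A quick standalone check of the per-node counters the message refers to (generic Linux sysfs paths, not an SPDK script):

    for node in /sys/devices/system/node/node*; do
        hp=$node/hugepages/hugepages-2048kB
        [ -d "$hp" ] || continue
        printf '%s: total=%s free=%s\n' "${node##*/}" \
            "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
    done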
00:44:00.769 [2024-07-22 23:24:36.880994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.881467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.881504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.881525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.881819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.882117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.882145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.882163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.886590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.769 [2024-07-22 23:24:36.895989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.896487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.896524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.896545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.896837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.897135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.897162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.897180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.901624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.769 [2024-07-22 23:24:36.910781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.911337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.911375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.911404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.911698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.911996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.912024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.912041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.916486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.769 [2024-07-22 23:24:36.919733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:00.769 [2024-07-22 23:24:36.925643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.926150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.926188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.926212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.926520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.926820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.926848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.926867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.931329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.769 [2024-07-22 23:24:36.940504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.941054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.941107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.941131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.941444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.941776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.941805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.941825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.946250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.769 [2024-07-22 23:24:36.955410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.955901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.955938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.955960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.956254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.956577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.956607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.956629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.961054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.769 [2024-07-22 23:24:36.970203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.970743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.970784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.970805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.971103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.971415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.971443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.971462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.975882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.769 [2024-07-22 23:24:36.985063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:36.985604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:36.985658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:36.985683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:36.985984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:36.986285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:36.986326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:36.986361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:36.990799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:00.769 [2024-07-22 23:24:36.999954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:37.000518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:37.000559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:37.000583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.769 [2024-07-22 23:24:37.000883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.769 [2024-07-22 23:24:37.001184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.769 [2024-07-22 23:24:37.001212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.769 [2024-07-22 23:24:37.001232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.769 [2024-07-22 23:24:37.005680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.769 [2024-07-22 23:24:37.014820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.769 [2024-07-22 23:24:37.015329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.769 [2024-07-22 23:24:37.015367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.769 [2024-07-22 23:24:37.015388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.770 [2024-07-22 23:24:37.015681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.770 [2024-07-22 23:24:37.015981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.770 [2024-07-22 23:24:37.016009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.770 [2024-07-22 23:24:37.016029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.770 [2024-07-22 23:24:37.020466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.770 [2024-07-22 23:24:37.029637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.770 [2024-07-22 23:24:37.030190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.770 [2024-07-22 23:24:37.030228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.770 [2024-07-22 23:24:37.030250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.770 [2024-07-22 23:24:37.030558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.770 [2024-07-22 23:24:37.030616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:44:00.770 [2024-07-22 23:24:37.030663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:00.770 [2024-07-22 23:24:37.030682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:00.770 [2024-07-22 23:24:37.030699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:00.770 [2024-07-22 23:24:37.030713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:00.770 [2024-07-22 23:24:37.030858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.770 [2024-07-22 23:24:37.030884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.770 [2024-07-22 23:24:37.030903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.770 [2024-07-22 23:24:37.031070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:00.770 [2024-07-22 23:24:37.031152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:44:00.770 [2024-07-22 23:24:37.031157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:00.770 [2024-07-22 23:24:37.035450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.770 [2024-07-22 23:24:37.044615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.770 [2024-07-22 23:24:37.045202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.770 [2024-07-22 23:24:37.045249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.770 [2024-07-22 23:24:37.045274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.770 [2024-07-22 23:24:37.045588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.770 [2024-07-22 23:24:37.045907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.770 [2024-07-22 23:24:37.045935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.770 [2024-07-22 23:24:37.045956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.770 [2024-07-22 23:24:37.050390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
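Because the target was launched with -e 0xFFFF, app_setup_trace enables every tracepoint group and prints the two ways to inspect the trace buffer; both commands below are taken directly from those notices (app name "nvmf", shm id 0, trace file /dev/shm/nvmf_trace.0):

    spdk_trace -s nvmf -i 0                    # live snapshot of events while the target is running
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0    # keep the buffer for offline analysis/debug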
00:44:00.770 [2024-07-22 23:24:37.059553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.770 [2024-07-22 23:24:37.060171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.770 [2024-07-22 23:24:37.060219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.770 [2024-07-22 23:24:37.060244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.770 [2024-07-22 23:24:37.060557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.770 [2024-07-22 23:24:37.060862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.770 [2024-07-22 23:24:37.060891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.770 [2024-07-22 23:24:37.060913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:00.770 [2024-07-22 23:24:37.065349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:00.770 [2024-07-22 23:24:37.074512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:00.770 [2024-07-22 23:24:37.075134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.770 [2024-07-22 23:24:37.075184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:00.770 [2024-07-22 23:24:37.075209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:00.770 [2024-07-22 23:24:37.075528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:00.770 [2024-07-22 23:24:37.075833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:00.770 [2024-07-22 23:24:37.075862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:00.770 [2024-07-22 23:24:37.075883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.080323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.030 [2024-07-22 23:24:37.089479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.090104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.090154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.090179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.090498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.090803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.090832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.090854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.095297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.030 [2024-07-22 23:24:37.104448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.105009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.105050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.105073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.105384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.105686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.105714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.105735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.110155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.030 [2024-07-22 23:24:37.119304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.119936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.119985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.120010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.120327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.120632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.120661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.120683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.125106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.030 [2024-07-22 23:24:37.134256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.134814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.134860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.134885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.135188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.135505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.135534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.135554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.139979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.030 [2024-07-22 23:24:37.149118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.149663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.149701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.149733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.150027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.150340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.150370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.150388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.154810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.030 [2024-07-22 23:24:37.163952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.164490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.164529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.164551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.164844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.165143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.165171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.165189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.169621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.030 [2024-07-22 23:24:37.178763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.179299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.179344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.179366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.179659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.179959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.030 [2024-07-22 23:24:37.179988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.030 [2024-07-22 23:24:37.180006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.030 [2024-07-22 23:24:37.184444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.030 [2024-07-22 23:24:37.193582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.030 [2024-07-22 23:24:37.194076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.030 [2024-07-22 23:24:37.194114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.030 [2024-07-22 23:24:37.194135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.030 [2024-07-22 23:24:37.194440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.030 [2024-07-22 23:24:37.194740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.194775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.194794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.199219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.031 [2024-07-22 23:24:37.208366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.208893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.208931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.208952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.209245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.209555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.209585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.209604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.214027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.031 [2024-07-22 23:24:37.223172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.223716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.223754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.223775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.224067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.224379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.224408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.224427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.228849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.031 [2024-07-22 23:24:37.238013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.238522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.238560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.238581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.238875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.239174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.239202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.239220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.243656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.031 [2024-07-22 23:24:37.252799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.253298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.253344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.253386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.253685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.253983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.254012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.254030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.258459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.031 [2024-07-22 23:24:37.267595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.268143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.268180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.268201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.268505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.268804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.268832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.268850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.273271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.031 [2024-07-22 23:24:37.282411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.282954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.282992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.283013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.283305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:01.031 [2024-07-22 23:24:37.283616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.283646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.283664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.031 [2024-07-22 23:24:37.288086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.031 [2024-07-22 23:24:37.297240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.297704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.297742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.297764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.298057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.298368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.298397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.298417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.302837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.031 [2024-07-22 23:24:37.312239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.312748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.312786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.312807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.313100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.313411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.313440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.031 [2024-07-22 23:24:37.313459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.031 [2024-07-22 23:24:37.317880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.031 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.031 [2024-07-22 23:24:37.325253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:01.031 [2024-07-22 23:24:37.327033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.031 [2024-07-22 23:24:37.327588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.031 [2024-07-22 23:24:37.327625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.031 [2024-07-22 23:24:37.327647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.031 [2024-07-22 23:24:37.327948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.031 [2024-07-22 23:24:37.328247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.031 [2024-07-22 23:24:37.328275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.032 [2024-07-22 23:24:37.328293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.032 [2024-07-22 23:24:37.334548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.291 [2024-07-22 23:24:37.343758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.291 [2024-07-22 23:24:37.344224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.291 [2024-07-22 23:24:37.344261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.291 [2024-07-22 23:24:37.344282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.291 [2024-07-22 23:24:37.344586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.291 [2024-07-22 23:24:37.344885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.291 [2024-07-22 23:24:37.344914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.291 [2024-07-22 23:24:37.344932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.291 [2024-07-22 23:24:37.349364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.291 [2024-07-22 23:24:37.358533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.291 [2024-07-22 23:24:37.359144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.291 [2024-07-22 23:24:37.359187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.291 [2024-07-22 23:24:37.359211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.291 [2024-07-22 23:24:37.359524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.291 [2024-07-22 23:24:37.359828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.291 [2024-07-22 23:24:37.359857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.291 [2024-07-22 23:24:37.359877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.291 [2024-07-22 23:24:37.364294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.291 [2024-07-22 23:24:37.373459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.291 [2024-07-22 23:24:37.374070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.291 [2024-07-22 23:24:37.374115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.291 [2024-07-22 23:24:37.374140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.291 [2024-07-22 23:24:37.374457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.291 Malloc0 00:44:01.291 [2024-07-22 23:24:37.374760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.291 [2024-07-22 23:24:37.374788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.291 [2024-07-22 23:24:37.374809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.291 [2024-07-22 23:24:37.379242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.291 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.291 [2024-07-22 23:24:37.388394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.291 [2024-07-22 23:24:37.388947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.291 [2024-07-22 23:24:37.388984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19031e0 with addr=10.0.0.2, port=4420 00:44:01.291 [2024-07-22 23:24:37.389005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19031e0 is same with the state(5) to be set 00:44:01.291 [2024-07-22 23:24:37.389298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19031e0 (9): Bad file descriptor 00:44:01.291 [2024-07-22 23:24:37.389607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.291 [2024-07-22 23:24:37.389635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.291 [2024-07-22 23:24:37.389654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.292 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.292 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:01.292 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.292 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.292 [2024-07-22 23:24:37.394074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.292 [2024-07-22 23:24:37.394472] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:01.292 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.292 23:24:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1066545 00:44:01.292 [2024-07-22 23:24:37.403209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.292 [2024-07-22 23:24:37.567629] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
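For reference, the bdevperf trace above amounts to a standard NVMe-oF/TCP target bring-up issued over SPDK RPCs before the reconnect/reset exercise runs. Below is a minimal stand-alone sketch of that same sequence; it assumes an already running nvmf_tgt reachable through the default scripts/rpc.py socket (the test issues the identical calls via its rpc_cmd helper), so the RPC wrapper and socket location are illustrative assumptions, not taken from this run.

  # Assumed helper: scripts/rpc.py from the SPDK tree, talking to an nvmf_tgt that is already started.
  RPC=scripts/rpc.py

  # Create the TCP transport with the same options as the traced nvmf_create_transport call.
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # Back the namespace with a 64 MiB, 512-byte-block malloc bdev.
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # Create the subsystem, attach the bdev as a namespace, and listen on 10.0.0.2:4420,
  # mirroring the host/bdevperf.sh@19-21 rpc_cmd calls in the trace.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the bdevperf application attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and the repeated "Resetting controller failed" entries above correspond to reset attempts made while that listener was not yet accepting connections; the final "Resetting controller successful" notice marks the point where the attach succeeds and the verify workload below begins.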
00:44:11.278 00:44:11.278 Latency(us) 00:44:11.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:11.278 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:11.278 Verification LBA range: start 0x0 length 0x4000 00:44:11.278 Nvme1n1 : 15.02 4902.02 19.15 5212.03 0.00 12616.01 755.48 23981.32 00:44:11.278 =================================================================================================================== 00:44:11.278 Total : 4902.02 19.15 5212.03 0.00 12616.01 755.48 23981.32 00:44:11.278 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:44:11.278 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:11.279 rmmod nvme_tcp 00:44:11.279 rmmod nvme_fabrics 00:44:11.279 rmmod nvme_keyring 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1067173 ']' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1067173 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1067173 ']' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1067173 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1067173 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1067173' 00:44:11.279 killing process with pid 1067173 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1067173 00:44:11.279 
23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1067173 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:11.279 23:24:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:13.189 00:44:13.189 real 0m24.258s 00:44:13.189 user 1m1.707s 00:44:13.189 sys 0m5.797s 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:13.189 ************************************ 00:44:13.189 END TEST nvmf_bdevperf 00:44:13.189 ************************************ 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:44:13.189 ************************************ 00:44:13.189 START TEST nvmf_target_disconnect 00:44:13.189 ************************************ 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:44:13.189 * Looking for test storage... 
00:44:13.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.189 
23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:44:13.189 23:24:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:16.518 
23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:16.518 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:16.518 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:16.519 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:16.519 Found net devices under 0000:84:00.0: cvl_0_0 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:16.519 Found net devices under 0000:84:00.1: cvl_0_1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:16.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:16.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:44:16.519 00:44:16.519 --- 10.0.0.2 ping statistics --- 00:44:16.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:16.519 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:16.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:16.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:44:16.519 00:44:16.519 --- 10.0.0.1 ping statistics --- 00:44:16.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:16.519 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:44:16.519 ************************************ 00:44:16.519 START TEST nvmf_target_disconnect_tc1 00:44:16.519 ************************************ 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:44:16.519 23:24:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:44:16.519 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:16.519 EAL: No free 2048 kB hugepages reported on node 1 00:44:16.779 [2024-07-22 23:24:52.832333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:16.779 [2024-07-22 23:24:52.832415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3baf0 with addr=10.0.0.2, port=4420 00:44:16.779 [2024-07-22 23:24:52.832460] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:44:16.779 [2024-07-22 23:24:52.832493] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:16.779 [2024-07-22 23:24:52.832511] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:44:16.779 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:44:16.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:44:16.779 Initializing NVMe Controllers 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:16.779 00:44:16.779 real 0m0.165s 00:44:16.779 user 0m0.071s 00:44:16.779 sys 0m0.093s 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:44:16.779 ************************************ 00:44:16.779 END TEST nvmf_target_disconnect_tc1 00:44:16.779 ************************************ 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:44:16.779 ************************************ 00:44:16.779 START TEST nvmf_target_disconnect_tc2 00:44:16.779 ************************************ 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1070433 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1070433 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1070433 ']' 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:16.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:16.779 23:24:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:16.779 [2024-07-22 23:24:53.030180] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:44:16.779 [2024-07-22 23:24:53.030361] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:17.039 EAL: No free 2048 kB hugepages reported on node 1 00:44:17.039 [2024-07-22 23:24:53.195182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:17.300 [2024-07-22 23:24:53.364885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:17.300 [2024-07-22 23:24:53.364991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:17.300 [2024-07-22 23:24:53.365037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:17.300 [2024-07-22 23:24:53.365068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:17.300 [2024-07-22 23:24:53.365094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:17.300 [2024-07-22 23:24:53.365772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:44:17.300 [2024-07-22 23:24:53.365852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:44:17.300 [2024-07-22 23:24:53.365938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:44:17.300 [2024-07-22 23:24:53.365946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:44:17.300 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:17.300 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:44:17.300 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:17.300 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:17.300 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 Malloc0 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 [2024-07-22 23:24:53.677843] 
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 [2024-07-22 23:24:53.726382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1070487 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:17.561 23:24:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:44:17.561 EAL: No free 2048 kB hugepages reported on node 1 00:44:19.473 23:24:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1070433 00:44:19.473 23:24:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 [2024-07-22 23:24:55.760098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read 
completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 [2024-07-22 23:24:55.760778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with 
error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Write completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 [2024-07-22 23:24:55.761481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.473 starting I/O failed 00:44:19.473 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, 
sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Read completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 Write completed with error (sct=0, sc=8) 00:44:19.474 starting I/O failed 00:44:19.474 [2024-07-22 23:24:55.761872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:19.474 [2024-07-22 23:24:55.762101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.762194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.762461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.762501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.762678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.762744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.762956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.762992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.763173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.763247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.763439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.763476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 
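To make the trace above easier to follow: the target that was just configured (a 64 MiB, 512-byte-block Malloc0 namespace in subsystem nqn.2016-06.io.spdk:cnode1, TCP transport, listener on 10.0.0.2:4420) is killed with kill -9 two seconds after the reconnect example starts driving queue-depth-32 random I/O from cores 0-3 (-c 0xF). Each of the example's four qpairs (ids 1 through 4 in the messages above) then reports its outstanding commands as completed with error followed by a CQ transport error -6, which is the burst of Read/Write failures printed above. A condensed recap of the sequence, using the same RPC names and flags that appear in the trace; rpc_cmd in the test scripts wraps scripts/rpc.py, so the plain rpc.py form shown here is an assumption about the equivalent manual invocation:

    # target-side configuration (same RPCs as in the trace)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # host side: 32-deep randrw for 10 s on cores 0-3, then kill the target mid-run
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2 && kill -9 "$nvmfpid"    # pid 1070433 in this run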
00:44:19.474 [2024-07-22 23:24:55.763646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.763681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.763910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.763977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.764203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.764269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.764492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.764528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.764720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.764787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.765006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.765073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.765367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.765404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.765527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.765563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.765758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.765829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.766065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.766130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 
00:44:19.474 [2024-07-22 23:24:55.766398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.766435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.766585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.766652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.766894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.766930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.767137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.767202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.767420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.767456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.767604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.767639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.767854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.767920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.768175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.768240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.768443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.768480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.474 [2024-07-22 23:24:55.768654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.768730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 
00:44:19.474 [2024-07-22 23:24:55.768944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.474 [2024-07-22 23:24:55.769009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.474 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.769223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.769258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.769398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.769435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.770208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.770273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.770531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.770567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.770731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.770796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.771052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.771117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.771323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.771378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.771599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.771664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.771912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.771977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 
00:44:19.475 [2024-07-22 23:24:55.772228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.772264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.772468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.772505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.772702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.772767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.773022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.773059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.773229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.773293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.773543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.773617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.773840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.773877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.774086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.774152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.774377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.774444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.774694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.774731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 
00:44:19.475 [2024-07-22 23:24:55.774876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.774941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.775196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.775261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.775521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.775558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.775698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.775766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.775984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.776055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.776286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.776330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.776478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.776543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.776800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.776865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.777117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.777153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.777356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.777423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 
00:44:19.475 [2024-07-22 23:24:55.777675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.777743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.778007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.778043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.778337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.778410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.778668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.778734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.779012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.779049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.779233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.779301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.779516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.779552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.779731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.779768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.475 [2024-07-22 23:24:55.779910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.475 [2024-07-22 23:24:55.779979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.475 qpair failed and we were unable to recover it. 00:44:19.476 [2024-07-22 23:24:55.780215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.780281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 
00:44:19.476 [2024-07-22 23:24:55.780480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.780517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 00:44:19.476 [2024-07-22 23:24:55.780687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.780753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 00:44:19.476 [2024-07-22 23:24:55.781053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.781119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 00:44:19.476 [2024-07-22 23:24:55.781344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.781382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 00:44:19.476 [2024-07-22 23:24:55.781612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.781677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 00:44:19.476 [2024-07-22 23:24:55.781947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.476 [2024-07-22 23:24:55.782014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.476 qpair failed and we were unable to recover it. 00:44:19.748 [2024-07-22 23:24:55.782255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.748 [2024-07-22 23:24:55.782293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.748 qpair failed and we were unable to recover it. 00:44:19.748 [2024-07-22 23:24:55.782537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.748 [2024-07-22 23:24:55.782615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.748 qpair failed and we were unable to recover it. 00:44:19.748 [2024-07-22 23:24:55.782875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.782949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.783236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.783274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 
00:44:19.749 [2024-07-22 23:24:55.783485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.783522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.783720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.783785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.784103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.784139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.784424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.784492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.784753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.784819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.785084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.785121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.785358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.785425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.785747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.785813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.786135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.786197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.786475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.786542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 
00:44:19.749 [2024-07-22 23:24:55.786843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.786909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.787174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.787210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.787382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.787450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.787756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.787823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.788082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.788119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.788373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.788440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.788752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.788831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.789060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.789108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.789242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.789276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.789468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.789532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 
00:44:19.749 [2024-07-22 23:24:55.789797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.789834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.790108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.790173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.790461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.790528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.790828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.790865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.791100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.791167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.791471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.791538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.791796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.791834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.792026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.792092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.792321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.792380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 00:44:19.749 [2024-07-22 23:24:55.792612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.749 [2024-07-22 23:24:55.792648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.749 qpair failed and we were unable to recover it. 
00:44:19.749 [2024-07-22 23:24:55.792977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:19.749 [2024-07-22 23:24:55.793043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:19.749 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 23:24:55.793 through 23:24:55.860 ...]
00:44:19.755 [2024-07-22 23:24:55.860016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:19.755 [2024-07-22 23:24:55.860053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:19.755 qpair failed and we were unable to recover it.
00:44:19.755 [2024-07-22 23:24:55.860328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.860393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.860684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.860750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.861049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.861085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.861401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.861467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.861773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.861837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.862130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.862167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.862426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.862492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.862790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.862855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.863109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.863143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.863363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.863429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 
00:44:19.755 [2024-07-22 23:24:55.863663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.863728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.864001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.864037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.864337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.755 [2024-07-22 23:24:55.864414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.755 qpair failed and we were unable to recover it. 00:44:19.755 [2024-07-22 23:24:55.864648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.864713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.865018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.865065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.865389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.865455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.865717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.865782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.866091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.866128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.866390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.866456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.866712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.866777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 
00:44:19.756 [2024-07-22 23:24:55.867052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.867088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.867368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.867434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.867681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.867746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.867997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.868033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.868260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.868340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.868546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.868612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.868915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.868951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.869271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.869362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.869542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.869608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.869886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.869922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 
00:44:19.756 [2024-07-22 23:24:55.870194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.870259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.870500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.870565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.870867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.870903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.871195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.871260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.871479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.871544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.871816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.871853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.872121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.872186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.872434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.872510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.872822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.872858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.873108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.873172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 
00:44:19.756 [2024-07-22 23:24:55.873418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.873485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.873725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.873762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.873946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.874011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.874270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.874347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.874581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.874617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.874851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.874916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.875176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.875241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.875568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.875605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.875870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.875935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.876193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.876261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 
00:44:19.756 [2024-07-22 23:24:55.876584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.756 [2024-07-22 23:24:55.876641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.756 qpair failed and we were unable to recover it. 00:44:19.756 [2024-07-22 23:24:55.876947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.877014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.877329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.877392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.877600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.877637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.877880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.877946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.878217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.878283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.878610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.878648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.878956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.879022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.879340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.879407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.879668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.879705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 
00:44:19.757 [2024-07-22 23:24:55.879971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.880037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.880364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.880431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.880710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.880747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.880931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.880997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.881307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.881391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.881690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.881727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.882025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.882090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.882400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.882467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.882776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.882812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.882995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.883070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 
00:44:19.757 [2024-07-22 23:24:55.883341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.883407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.883691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.883728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.883978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.884044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.884725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.884828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.885102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.885142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.885379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.885447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.885701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.885768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.886064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.886108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.886379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.886417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.886647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.886714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 
00:44:19.757 [2024-07-22 23:24:55.887019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.887056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.887366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.887434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.887641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.887707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.888007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.888045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.888353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.888421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.888723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.888789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.889090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.889127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.889426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.889493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.889753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.757 [2024-07-22 23:24:55.889818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.757 qpair failed and we were unable to recover it. 00:44:19.757 [2024-07-22 23:24:55.890112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.890149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 
00:44:19.758 [2024-07-22 23:24:55.890417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.890484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.890803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.890867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.891170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.891207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.891481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.891518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.891728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.891793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.892088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.892125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.892378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.892446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.892707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.892774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.893079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.893115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.893387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.893453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 
00:44:19.758 [2024-07-22 23:24:55.893727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.893793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.894051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.894090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.894259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.894339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.894642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.894708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.895016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.895053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.895372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.895438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.895743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.895809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.896108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.896145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.896452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.896520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.896789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.896856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 
00:44:19.758 [2024-07-22 23:24:55.897132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.897168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.897379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.897446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.897743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.897809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.898070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.898107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.898362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.898429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.898685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.898751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.899054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.899091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.899340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.899416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.899684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.899750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.900060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.900097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 
00:44:19.758 [2024-07-22 23:24:55.900355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.758 [2024-07-22 23:24:55.900393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.758 qpair failed and we were unable to recover it. 00:44:19.758 [2024-07-22 23:24:55.900531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.900566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.900832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.900869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.901075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.901141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.901435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.901503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.901731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.901767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.902027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.902093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.902356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.902423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.902738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.902774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.903096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.903162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 
00:44:19.759 [2024-07-22 23:24:55.903430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.903497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.903769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.903816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.904006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.904072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.904379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.904447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.904715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.904751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.904975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.905040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.905331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.905399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.905662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.905698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.905910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.905976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 00:44:19.759 [2024-07-22 23:24:55.906234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.759 [2024-07-22 23:24:55.906299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.759 qpair failed and we were unable to recover it. 
00:44:19.764 [2024-07-22 23:24:55.973392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.764 [2024-07-22 23:24:55.973488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.764 qpair failed and we were unable to recover it. 00:44:19.764 [2024-07-22 23:24:55.973806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.764 [2024-07-22 23:24:55.973843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.764 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.974161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.974228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.974524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.974590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.974892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.974936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.975240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.975306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.975583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.975650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.975951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.975988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.976250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.976342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.976648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.976714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 
00:44:19.765 [2024-07-22 23:24:55.976977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.977014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.977209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.977274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.977572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.977638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.977930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.977967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.978229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.978295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.978616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.978682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.978935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.978971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.979228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.979292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.979626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.980009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.980062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 
00:44:19.765 [2024-07-22 23:24:55.980334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.980402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.980658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.980724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.980973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.981009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.981202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.981266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.981571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.981645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.981903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.981940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.982194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.982260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.982597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.982663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.982974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.983010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.983335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.983402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 
00:44:19.765 [2024-07-22 23:24:55.983668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.983732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.984050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.984087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.984398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.984466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.984776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.984842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.985143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.985180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.985495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.985562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.985874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.985940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.986244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.986281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.986609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.986674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.765 qpair failed and we were unable to recover it. 00:44:19.765 [2024-07-22 23:24:55.986971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.765 [2024-07-22 23:24:55.987037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 
00:44:19.766 [2024-07-22 23:24:55.987296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.987342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.987590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.987654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.987969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.988035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.988247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.988284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.988531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.988605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.988861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.988926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.989230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.989266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.989604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.989671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.989887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.989953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.990205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.990241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 
00:44:19.766 [2024-07-22 23:24:55.990488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.990526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.990750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.990817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.991064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.991101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.991296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.991380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.991680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.991747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.992010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.992046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.992274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.992357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.992620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.992685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.992947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.992984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.993227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.993292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 
00:44:19.766 [2024-07-22 23:24:55.993630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.993696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.993963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.994001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.994264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.994349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.994607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.994672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.994976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.995013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.995333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.995400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.995668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.995732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.996028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.996065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.996370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.996438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.996685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.996751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 
00:44:19.766 [2024-07-22 23:24:55.997047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.997084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.997394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.997461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.997760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.997826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.998135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.998172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.998411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.998448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.998702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.998768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.999064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.766 [2024-07-22 23:24:55.999101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.766 qpair failed and we were unable to recover it. 00:44:19.766 [2024-07-22 23:24:55.999382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:55.999449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:55.999713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:55.999780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.000043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.000081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 
00:44:19.767 [2024-07-22 23:24:56.000330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.000397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.000697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.000764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.001061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.001098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.001412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.001480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.001793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.001869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.002168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.002205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.002491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.002528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.002787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.002853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.003151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.003187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.003492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.003557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 
00:44:19.767 [2024-07-22 23:24:56.003826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.003892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.004152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.004188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.004467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.004533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.004834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.004900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.005216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.005271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.005545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.005611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.005813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.005875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.006144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.006181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.006473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.006540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.006837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.006903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 
00:44:19.767 [2024-07-22 23:24:56.007200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.007236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.007517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.007554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.007828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.007894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.008187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.008224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.008532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.008569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.008814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.008881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.009185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.009222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.009487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.009524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.767 [2024-07-22 23:24:56.009747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.767 [2024-07-22 23:24:56.009813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.767 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.010067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.010104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 
00:44:19.768 [2024-07-22 23:24:56.010341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.010409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.010734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.010801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.011105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.011142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.011405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.011472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.011775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.011841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.012103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.012140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.012372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.012410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.012642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.012733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.013034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.013083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.013432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.013524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 
00:44:19.768 [2024-07-22 23:24:56.013895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.013987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.014283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.014345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.014708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.014781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.015072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.015139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.015393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.015437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.015668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.015734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.016044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.016132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.016478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.016530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.016867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.016959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 00:44:19.768 [2024-07-22 23:24:56.017329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:19.768 [2024-07-22 23:24:56.017419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:19.768 qpair failed and we were unable to recover it. 
00:44:19.768 [2024-07-22 23:24:56.017740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:19.768 [2024-07-22 23:24:56.017792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:19.768 qpair failed and we were unable to recover it.
00:44:19.768 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection retry between 2024-07-22 23:24:56.018 and 23:24:56.098 ...]
00:44:20.046 [2024-07-22 23:24:56.098811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.046 [2024-07-22 23:24:56.098862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:20.046 qpair failed and we were unable to recover it.
00:44:20.046 [2024-07-22 23:24:56.099214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.099285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.099597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.099664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.099967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.100034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.100335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.100409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.100682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.100772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.101129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.101219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.101588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.101680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.102014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.102064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.102453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.102548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.102842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.102911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 
00:44:20.046 [2024-07-22 23:24:56.103162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.103228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.103507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.103551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.103801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.103867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.104196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.046 [2024-07-22 23:24:56.104288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.046 qpair failed and we were unable to recover it. 00:44:20.046 [2024-07-22 23:24:56.104685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.104776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.105116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.105185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.105517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.105607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.105916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.106010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.106342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.106411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.106701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.106754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 
00:44:20.047 [2024-07-22 23:24:56.107019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.107085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.107383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.107452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.107787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.107878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.108211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.108279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.108650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.108740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.109127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.109219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.109570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.109642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.109933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.109970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.110234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.110300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.110607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.110673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 
00:44:20.047 [2024-07-22 23:24:56.110935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.111007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.111354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.111407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.111672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.111752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.112107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.112196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.112500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.112587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.112887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.112938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.113253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.113724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.113812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.114181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.114271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.114640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.114712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 
00:44:20.047 [2024-07-22 23:24:56.115065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.115135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.115454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.115523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.047 [2024-07-22 23:24:56.115825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.047 [2024-07-22 23:24:56.115891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.047 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.116159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.116195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.116483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.116577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.116884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.116973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.117340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.117422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.117789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.117872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.118240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.118350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.118669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.118736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 
00:44:20.048 [2024-07-22 23:24:56.119036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.119102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.119358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.119402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.119657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.119723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.120058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.120148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.120490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.120581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.120919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.120997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.121331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.121422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.121702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.121792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.122143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.122215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.122494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.122533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 
00:44:20.048 [2024-07-22 23:24:56.122817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.122883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.123176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.123242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.123565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.123656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.124001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.124051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.124402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.124496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.124864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.124957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.125327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.125419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.125771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.125811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.126050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.126123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.126400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.126467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 
00:44:20.048 [2024-07-22 23:24:56.126776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.126841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.127146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.127214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.127544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.127630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.127934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.128007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.128404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.128497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.128841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.128893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.129254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.129340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.129628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.129694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.130003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.130069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.048 [2024-07-22 23:24:56.130357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.130394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 
00:44:20.048 [2024-07-22 23:24:56.130712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.048 [2024-07-22 23:24:56.130804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.048 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.131163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.131252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.131592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.131684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.131983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.132034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.132395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.132491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.132809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.132878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.133176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.133242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.133553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.133591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.133907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.133993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.134335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.134427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 
00:44:20.049 [2024-07-22 23:24:56.134757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.134847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.135177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.135274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.135668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.135759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.136077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.136169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.136512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.136581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.136832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.136868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.137107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.137169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.137456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.137524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.137797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.137884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.138235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.138286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 
00:44:20.049 [2024-07-22 23:24:56.138669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.138761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.139093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.139182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.139562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.139654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.140002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.140041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.140260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.140297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.140579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.140647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.140907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.140973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.141230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.141280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.141578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.141670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.141980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.142068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 
00:44:20.049 [2024-07-22 23:24:56.142422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.142514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.142819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.142870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.143224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.143335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.143633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.143700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.143972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.144038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.144340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.144408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.144669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.144735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.145006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.145097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.049 qpair failed and we were unable to recover it. 00:44:20.049 [2024-07-22 23:24:56.145485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.049 [2024-07-22 23:24:56.145575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.145896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.145978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 
00:44:20.050 [2024-07-22 23:24:56.146289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.146402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.146749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.146842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.147110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.147200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.147541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.147591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.147918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.148012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.148348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.148416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.148677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.148744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.149042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.149078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.149342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.149409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.149660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.149752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 
00:44:20.050 [2024-07-22 23:24:56.150112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.150182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.150550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.150640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.151015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.151105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.151468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.151562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.151907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.151976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.152190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.152227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.152485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.152553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.152800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.152865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.153160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.153251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 00:44:20.050 [2024-07-22 23:24:56.153619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.050 [2024-07-22 23:24:56.153671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.050 qpair failed and we were unable to recover it. 
00:44:20.056 [2024-07-22 23:24:56.230572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.230638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.230903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.230968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.231266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.231349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.231691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.231742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.231986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.232092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.232441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.232533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.232839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.232929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.233260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.233347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.233718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.233808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.234166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.234255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 
00:44:20.056 [2024-07-22 23:24:56.234596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.234687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.234984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.235034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.235391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.235461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.235723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.235790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.236081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.236147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.236346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.236384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.236605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.236693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.237010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.237101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.237449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.237541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.237875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.237964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 
00:44:20.056 [2024-07-22 23:24:56.238348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.238439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.238757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.238829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.239115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.239182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.239496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.239534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.239753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.239818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.240118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.240201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.240614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.240705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.241031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.241074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.241361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.241453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.241807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.241897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 
00:44:20.056 [2024-07-22 23:24:56.242265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.242351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.242637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.056 [2024-07-22 23:24:56.242680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.056 qpair failed and we were unable to recover it. 00:44:20.056 [2024-07-22 23:24:56.242974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.243041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.243354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.243392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.243546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.243604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.243926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.243976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.244274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.244409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.244730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.244821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.245139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.245229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.245584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.245624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 
00:44:20.057 [2024-07-22 23:24:56.245833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.245903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.246211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.246277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.246596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.246663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.246899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.246949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.247266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.247377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.247724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.247813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.248123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.248212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.248563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.248639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.249009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.249080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.249390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.249458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 
00:44:20.057 [2024-07-22 23:24:56.249715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.249781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.250051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.250088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.250289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.250394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.250763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.250854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.251211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.251301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.251653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.251703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.251981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.252072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.252378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.252449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.252730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.252798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.253020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.253058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 
00:44:20.057 [2024-07-22 23:24:56.253331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.253398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.253698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.253787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.254115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.254206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.254573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.254625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.254993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.255082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.255439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.255530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.255888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.255959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.256260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.256297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.256641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.256707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.057 qpair failed and we were unable to recover it. 00:44:20.057 [2024-07-22 23:24:56.256963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.057 [2024-07-22 23:24:56.257029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 
00:44:20.058 [2024-07-22 23:24:56.257336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.257426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.257771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.257880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.258232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.258342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.258659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.258749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.259116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.259211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.259576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.259614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.259881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.260205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.260270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.260555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.260621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.260890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.260927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 
00:44:20.058 [2024-07-22 23:24:56.261160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.261225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.261545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.261583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.261886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.261952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.262214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.262250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.262536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.262604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.262916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.262982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.263239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.263305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.263635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.263671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.263983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.264049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.264371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.264438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 
00:44:20.058 [2024-07-22 23:24:56.264742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.264809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.265065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.265102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.265348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.265415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.265673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.265739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.265990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.266055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.266366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.266404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.266718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.266784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.267081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.267145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.267465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.267533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.267837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.267874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 
00:44:20.058 [2024-07-22 23:24:56.268181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.268246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.268568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.268633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.268889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.268955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.269203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.269239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.269445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.269513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.269738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.269803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.270101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.058 [2024-07-22 23:24:56.270166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.058 qpair failed and we were unable to recover it. 00:44:20.058 [2024-07-22 23:24:56.270469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.270506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.270813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.270879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.271134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.271199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 
00:44:20.059 [2024-07-22 23:24:56.271510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.271578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.271840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.271881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.272168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.272232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.272559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.272597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.272906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.272970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.273271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.273319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.273577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.273642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.273900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.273964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.274269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.274351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.274655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.274691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 
00:44:20.059 [2024-07-22 23:24:56.274991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.275056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.275325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.275392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.275699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.275764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.276063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.276099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.276412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.276479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.276790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.276855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.277118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.277183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.277485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.277523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.277823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.277887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.278146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.278212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 
00:44:20.059 [2024-07-22 23:24:56.278483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.278548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.278861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.278897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.279111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.279176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.279488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.279554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.279813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.279878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.280175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.280212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.280699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.280767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.281061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.281127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.281396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.281465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.281751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.281788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 
00:44:20.059 [2024-07-22 23:24:56.282012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.282077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.282383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.282449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.282750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.282816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.283113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.283150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.283447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.059 [2024-07-22 23:24:56.283513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.059 qpair failed and we were unable to recover it. 00:44:20.059 [2024-07-22 23:24:56.283775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.283842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.284139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.284204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.284494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.284532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.284781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.284846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.285143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.285209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 
00:44:20.060 [2024-07-22 23:24:56.285518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.285585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.285853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.285889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.286067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.286133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.286434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.286502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.286761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.286826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.287124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.287161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.287470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.287537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.287837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.287902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.288197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.288262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.288541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.288578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 
00:44:20.060 [2024-07-22 23:24:56.288789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.288854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.289112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.289177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.289436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.289502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.289807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.289843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.290098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.290164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.290440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.290507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.290808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.290875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.291128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.291164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.291376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.291442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.291707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.291773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 
00:44:20.060 [2024-07-22 23:24:56.292017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.292082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.292392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.292429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.292690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.292756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.293015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.293079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.293392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.293459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.293762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.293798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.294064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.294129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.294390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.294458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.294718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.294793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.295092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.295128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 
00:44:20.060 [2024-07-22 23:24:56.295391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.295458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.295752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.295817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.296074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.060 [2024-07-22 23:24:56.296138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.060 qpair failed and we were unable to recover it. 00:44:20.060 [2024-07-22 23:24:56.296445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.296483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.296720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.296784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.297081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.297145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.297460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.297526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.297825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.297861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.298124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.298189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.298492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.298559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 
00:44:20.061 [2024-07-22 23:24:56.298859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.298925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.299225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.299261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.299588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.299654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.299924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.299989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.300283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.300382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.300695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.300731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.301035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.301101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.301402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.301469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.301766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.301831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.302099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.302135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 
00:44:20.061 [2024-07-22 23:24:56.302399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.302466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.302729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.302793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.303089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.303154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.303417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.303454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.303631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.303696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.303977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.304043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.304298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.304396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.304698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.304734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.304994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.305059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.305358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.305425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 
00:44:20.061 [2024-07-22 23:24:56.305722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.305787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.306091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.306128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.306434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.306501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.306799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.306865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.307134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.061 [2024-07-22 23:24:56.307199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.061 qpair failed and we were unable to recover it. 00:44:20.061 [2024-07-22 23:24:56.307515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.307552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.307807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.307872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.308128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.308193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.308453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.308529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.308833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.308869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 
00:44:20.062 [2024-07-22 23:24:56.309128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.309194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.309490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.309556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.309773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.309838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.310142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.310178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.310429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.310496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.310760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.310824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.311134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.311199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.311519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.311557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.311862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.311928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.312227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.312292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 
00:44:20.062 [2024-07-22 23:24:56.312628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.312693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.312945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.312982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.313152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.313217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.313527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.313565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.313853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.313918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.314219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.314255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.314493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.314559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.314815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.314880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.315183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.315248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.315520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.315557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 
00:44:20.062 [2024-07-22 23:24:56.315784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.315849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.316159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.316223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.316456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.316522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.316824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.316860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.317132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.317197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.317486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.317524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.317702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.317766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.318032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.318068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.318368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.318406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.318571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.318649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 
00:44:20.062 [2024-07-22 23:24:56.318949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.319014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.319318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.319356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.062 [2024-07-22 23:24:56.319631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.062 [2024-07-22 23:24:56.319696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.062 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.319963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.320027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.320243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.320332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.320596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.320633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.320895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.320960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.321256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.321337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.321596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.321672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.321942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.321978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 
00:44:20.063 [2024-07-22 23:24:56.322244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.322326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.322636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.322701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.322997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.323061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.323362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.323399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.323668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.323733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.324038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.324103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.324406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.324473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.324690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.324726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.324975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.325040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.325256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.325336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 
00:44:20.063 [2024-07-22 23:24:56.325586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.325652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.325861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.325897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.326170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.326236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.326554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.326590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.326915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.326981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.327241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.327277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.327505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.327571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.327785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.327851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.328116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.328181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.328440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.328477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 
00:44:20.063 [2024-07-22 23:24:56.328687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.328752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.329054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.329118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.329413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.329480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.329743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.329779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.330026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.330091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.330376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.330443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.330708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.330775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.331023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.331060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.331322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.331388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.063 [2024-07-22 23:24:56.331692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.331757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 
00:44:20.063 [2024-07-22 23:24:56.332061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.063 [2024-07-22 23:24:56.332127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.063 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.332428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.332465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.332722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.332787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.333035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.333099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.333409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.333476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.333795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.333830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.334102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.334167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.334465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.334532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.334790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.334864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 00:44:20.064 [2024-07-22 23:24:56.335167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.064 [2024-07-22 23:24:56.335204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.064 qpair failed and we were unable to recover it. 
00:44:20.064 [2024-07-22 23:24:56.335476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.064 [2024-07-22 23:24:56.335542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:20.064 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 23:24:56.335 through 23:24:56.396 ...]
00:44:20.352 [2024-07-22 23:24:56.396633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.352 [2024-07-22 23:24:56.396736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420
00:44:20.352 qpair failed and we were unable to recover it.
[... the same sequence repeats a few more times for tqpair=0x79bbb0 and then again for tqpair=0x7f8738000b90 through 23:24:56.406 ...]
00:44:20.353 [2024-07-22 23:24:56.405952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.353 [2024-07-22 23:24:56.406017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:20.353 qpair failed and we were unable to recover it.
00:44:20.353 [2024-07-22 23:24:56.406328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.353 [2024-07-22 23:24:56.406403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.353 qpair failed and we were unable to recover it. 00:44:20.353 [2024-07-22 23:24:56.406645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.406703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.406976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.407041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.407294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.407374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.407619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.407678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.407951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.407988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.408233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.408297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.408588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.408653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.408914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.408978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.409227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.409263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 
00:44:20.354 [2024-07-22 23:24:56.409481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.409519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.409713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.409777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.410077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.410143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.410446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.410484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.410693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.410757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.411064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.411130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.411394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.411431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.411582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.411619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.411880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.411946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.412249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.412336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 
00:44:20.354 [2024-07-22 23:24:56.412570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.354 [2024-07-22 23:24:56.412638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.354 qpair failed and we were unable to recover it. 00:44:20.354 [2024-07-22 23:24:56.412847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.412883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.413129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.413193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.413493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.413531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.413799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.413864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.414164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.414200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.414494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.414531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.414728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.414793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.415043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.415107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.415427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.415464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 
00:44:20.355 [2024-07-22 23:24:56.415715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.415780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.416039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.416104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.416354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.416391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.416637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.416699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.416958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.417022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.417351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.417388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.417602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.417667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.417918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.417955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.418203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.418268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.418597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.418663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 
00:44:20.355 [2024-07-22 23:24:56.418964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.419029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.419298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.419345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.419630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.419695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.419997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.420062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.420359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.420426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.420728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.420765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.420929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.420994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.421304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.421387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.421687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.421752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.422048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.422085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 
00:44:20.355 [2024-07-22 23:24:56.422356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.422423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.422738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.422803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.423105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.423170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.423396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.423433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.423694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.423759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.424019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.424094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.424399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.424465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.424773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.424809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.425069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.425134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 00:44:20.355 [2024-07-22 23:24:56.425388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.355 [2024-07-22 23:24:56.425455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.355 qpair failed and we were unable to recover it. 
00:44:20.355 [2024-07-22 23:24:56.425770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.425835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.426143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.426179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.426440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.426507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.426809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.426874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.427143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.427207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.427526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.427563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.427798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.427863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.428126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.428191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.428489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.428556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.428875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.428912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 
00:44:20.356 [2024-07-22 23:24:56.429152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.429218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.429530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.429567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.429871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.429936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.430197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.430234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.430524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.430590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.430890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.430955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.431256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.431338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.431596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.431633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.431850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.431915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.432217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.432283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 
00:44:20.356 [2024-07-22 23:24:56.432582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.432647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.432902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.432939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.433179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.433246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.433524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.433561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.433842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.433907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.434171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.434237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.434521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.434558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.434874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.434939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.435196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.435260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.435535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.435572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 
00:44:20.356 [2024-07-22 23:24:56.435802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.435867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.436171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.436235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.436462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.436528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.436826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.436863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.437126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.437175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.437397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.437455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.437684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.437734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.438014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.438050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.438332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.438383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.438611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.438660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 
00:44:20.356 [2024-07-22 23:24:56.438884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.438934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.439169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.439205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.439438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.439488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.439764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.439813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.440041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.440091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.440377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.440415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.440633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.440698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.440970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.441020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.441245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.441294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.441584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.441620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 
00:44:20.356 [2024-07-22 23:24:56.441857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.441906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.442176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.442242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.442537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.442614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.442916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.442952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.443215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.443279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.443521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.443570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.443908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.443973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.444234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.444270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.444490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.444527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.356 qpair failed and we were unable to recover it. 00:44:20.356 [2024-07-22 23:24:56.444834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.356 [2024-07-22 23:24:56.444898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.445145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.445209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.445525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.445563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.445887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.445953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.446207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.446273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.446550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.446629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.446932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.446968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.447234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.447299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.447620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.447685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.447983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.448049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.448361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.448399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.448684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.448750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.449049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.449114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.449400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.449450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.449632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.449669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.449840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.449904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.450159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.450234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.450594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.450661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.450970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.451007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.451291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.451381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.451619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.451685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.451983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.452047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.452349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.452387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.452638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.452704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.453007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.453072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.453393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.453443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.453688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.453999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.454064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.454383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.454433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.454718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.454784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.455091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.455128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.455430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.455480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.455761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.455826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.456134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.456200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.456464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.456515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.456806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.456872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.457137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.457203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.457513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.457565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.457855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.457892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.458077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.458142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.458400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.458452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.458742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.458808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.459068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.459104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.459306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.459390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.459659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.459725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.459988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.460053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.460363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.460402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.460706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.460771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.461060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.461126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.461438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.461489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.461712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.461749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.461990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.462057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.462285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.462380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.462646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.462711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.462975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.463012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.463292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.463381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.463596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.463671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.463887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.463952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.464217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.464254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.464479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.464517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.464732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.464798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 
00:44:20.357 [2024-07-22 23:24:56.465085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.357 [2024-07-22 23:24:56.465150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.357 qpair failed and we were unable to recover it. 00:44:20.357 [2024-07-22 23:24:56.465459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.465497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.465803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.465870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.466167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.466235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.466520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.466588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.466896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.466934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.467204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.467271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.467516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.467583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.467885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.467951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.468231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.468269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.468486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.468524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.468759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.468795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.468986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.469023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.469209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.469246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.469434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.469471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.469665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.469702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.469929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.469995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.470261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.470368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.470585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.470663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.470968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.471005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.471329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.471390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.471601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.471667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.471942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.472008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.472333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.472395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.472545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.472613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.472865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.472931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.473147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.473211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.473436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.473473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.473679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.473744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.474040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.474105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.474401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.474439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.474678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.474735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.474994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.475060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.475326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.475385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.475608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.475673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.475975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.476017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.476290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.476372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.477967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.478049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.478379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.478421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.478634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.478669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.478887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.478954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.479269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.479352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.479583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.479648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.479899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.479935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.480194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.480259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.480548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.480622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.480929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.480995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.481275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.481375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.481609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.481674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.481991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.482057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.482328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.482396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.482610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.482646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.482950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.483015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.483354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.483392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.483631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.483697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.483999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.484066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.484378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.484415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.484639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.484705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.484989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.485054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.485334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.485392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.485563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.485597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.485774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.485838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.486146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.486212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.486460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.486497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.486695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.486733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.486982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.487047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.487347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.487402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.487624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.487690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.487958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.487995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.488268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.488364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 
00:44:20.358 [2024-07-22 23:24:56.488586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.488653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.488965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.489030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.358 qpair failed and we were unable to recover it. 00:44:20.358 [2024-07-22 23:24:56.489293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.358 [2024-07-22 23:24:56.489339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.489579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.489645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.489954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.490019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.490333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.490411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.490665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.490701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.490952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.491017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.491273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.491356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.491630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.491695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 
00:44:20.359 [2024-07-22 23:24:56.491955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.491992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.492242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.492325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.492632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.492696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.492901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.492966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.493237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.493274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.493495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.493533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.493751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.493816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.494115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.494182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.494448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.494485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.494674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.494741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 
00:44:20.359 [2024-07-22 23:24:56.494962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.495028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.495337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.495404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.495702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.495737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.496036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.496101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.496369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.496435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.496748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.496812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.497128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.497164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.497480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.497546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.497815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.497880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.498188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.498253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 
00:44:20.359 [2024-07-22 23:24:56.498514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.498550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.498800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.498865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.499174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.499239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.499571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.499638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.499910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.499946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.500123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.500189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.500413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.500479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.500782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.500847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.501105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.501140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.501392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.501465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 
00:44:20.359 [2024-07-22 23:24:56.501771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.501835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.502136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.502200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.502480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.502517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.502709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.502773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.503071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.503135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.503358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.503400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.503577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.503614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.503803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.503868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.504192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.504257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.504603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.504704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 
00:44:20.359 [2024-07-22 23:24:56.505039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.505077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.505361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.505430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.505688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.505753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.506053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.506119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.506358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.506395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.506625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.506689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.506993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.507057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.507363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.507429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.507739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.507775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.508083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.508147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 
00:44:20.359 [2024-07-22 23:24:56.508423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.508490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.508801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.508866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.509175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.509211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.509494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.509531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.509781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.509846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.510148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.510213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.510492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.359 [2024-07-22 23:24:56.510528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.359 qpair failed and we were unable to recover it. 00:44:20.359 [2024-07-22 23:24:56.510724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.510789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.511051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.511116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.511395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.511461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.511768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.511805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.512091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.512157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.512429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.512495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.512756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.512821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.513094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.513131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.513286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.513330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.513477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.513511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.513748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.513811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.514111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.514148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.514362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.514428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.514728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.514793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.515072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.515137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.515395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.515432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.515629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.515693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.515992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.516057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.516337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.516413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.516682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.516719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.516979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.517044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.517258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.517337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.517553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.517618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.517885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.517921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.518084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.518148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.518408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.518474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.518773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.518837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.519101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.519137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.519343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.519409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.519677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.519743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.520054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.520118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.520425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.520463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.520677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.520742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.523000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.523076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.523402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.523470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.523779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.523816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.524063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.524128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.524368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.524435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.524701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.524767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.525016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.525053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.525332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.525399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.525621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.525686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.525990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.526055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.526377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.526456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.526767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.526833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.527136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.527202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.527504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.527572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.527885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.527921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.528238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.528303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.528568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.528634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.528909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.528976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.529181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.529216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.529455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.529521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.529823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.529889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.530131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.530197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.530412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.530458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.530667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.530705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.530951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.530988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.531225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.531330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.531658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.531727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.532051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.532117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.532395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.532480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.532793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.532859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 
00:44:20.360 [2024-07-22 23:24:56.533175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.533212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.533510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.533547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.533759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.533828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.534044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.534107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.534349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.534387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.360 [2024-07-22 23:24:56.534578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.360 [2024-07-22 23:24:56.534643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.360 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.534892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.534959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.535204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.535280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.535565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.535607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.535798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.535862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.536072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.536150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.536392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.536460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.536705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.536747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.536998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.537049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.537226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.537274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.537488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.537540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.537732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.537768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.537919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.537985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.538229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.538293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.538540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.538598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.538790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.538829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.539018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.539068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.539337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.539406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.539638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.539700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.539912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.539973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.540209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.540291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.540519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.540609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.540919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.540962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.541296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.541392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.541716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.541782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.542039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.542109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.542355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.542399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.542547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.542623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.542936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.543009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.543277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.543389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.543620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.543667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.543936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.544000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.544253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.544337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.544536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.544571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.544777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.544814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.545062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.545125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.545431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.545487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.545798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.545863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.546139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.546178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.546425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.546474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.546667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.546743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.547004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.547068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.547366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.547404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.547609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.547674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.547914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.547994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.548251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.548333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.548602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.548639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.548924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.548989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.549301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.549390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.549616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.549680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.549979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.550015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.550255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.550338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.550546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.550621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.550899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.550964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.551222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.551258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.551459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.551495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.551684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.551757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.552030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.552095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.552361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.552398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.552644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.552709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.552995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.553062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.553386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.553435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.553682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.553719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.553916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.553954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.554147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.554211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 
00:44:20.361 [2024-07-22 23:24:56.554436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.554486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.554745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.554781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.554935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.554979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.555201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.555267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.555524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.555574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.555851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.555891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.361 [2024-07-22 23:24:56.556126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.361 [2024-07-22 23:24:56.556192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.361 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.556451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.556501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.556798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.556865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.557163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.557205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.557469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.557506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.557702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.557744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.558050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.558114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.558399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.558438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.558674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.558739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.559043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.559108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.559327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.559402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.559600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.559636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.559840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.559904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.560183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.560260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.560532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.560581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.560812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.560855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.561036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.561102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.561334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.561403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.561604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.561670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.561970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.562012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.562373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.562424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.562689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.562725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.563067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.563132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.563394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.563431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.563668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.563734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.564033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.564112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.564426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.564476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.564767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.564804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.565018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.565081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.565389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.565440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.565637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.565684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.565960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.565996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.566278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.566341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.566609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.566658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.566966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.567032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.567369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.567405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.567664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.567731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.567993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.568056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.568442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.568491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.568773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.568839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.569107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.569170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.569419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.569478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.569757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.569820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.570113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.570168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.570400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.570450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.570731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.570797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.571098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.571162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.571434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.571478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.571719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.571783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.572075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.572141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.572413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.572466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.572663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.572699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.572825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.572860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.573101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.573201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.573444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.573481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.573711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.573746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.574066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.574130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.574402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.574438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.574562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.574624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.574867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.574901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.575121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.575182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.575406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.575441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.575578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.575611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.575876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.575910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 
00:44:20.362 [2024-07-22 23:24:56.576097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.362 [2024-07-22 23:24:56.576160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.362 qpair failed and we were unable to recover it. 00:44:20.362 [2024-07-22 23:24:56.576425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.576478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.576600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.576681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.576944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.577003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.577298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.577392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.577525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.577559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.577726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.577788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.578077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.578138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.578407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.578462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.578619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.578652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.578862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.578895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.579118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.579179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.579415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.579449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.579646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.579709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.580014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.580049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.580327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.580384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.580522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.580577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.580785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.580854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.581088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.581125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.581285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.581381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.581547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.581581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.581862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.581943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.582234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.582270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.582423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.582468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.582652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.582722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.582987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.583051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.583265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.583299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.583465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.583499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.583700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.583763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.584001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.584080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.584304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.584362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.584491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.584525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.584728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.584789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.585009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.585071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.585368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.585403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.585534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.585567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.585783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.585844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.586095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.586157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.586433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.586468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.586620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.586653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.586898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.586960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.587222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.587284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.587451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.587485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.587645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.587678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.587860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.587919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.588121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.588181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.588359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.588393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.588519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.588583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.588829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.588890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.589183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.589244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.589459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.589493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.589725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.589775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.590066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.590127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.590506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.590542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.590785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.590818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.591021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.591055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.591374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.591435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.591572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.591606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.591777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.591816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.591979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.592040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.592287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.592383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.592517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.592551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.592707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.592739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.592959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.593021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.593268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.593378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.593506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.593540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.593797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.593829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.594005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.594063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.594389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.594424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.594578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.594611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 
00:44:20.363 [2024-07-22 23:24:56.594846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.594885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.595098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.595159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.595403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.595437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.363 [2024-07-22 23:24:56.595625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.363 [2024-07-22 23:24:56.595657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.363 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.595957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.596000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.596169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.596202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.596414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.596448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.596674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.596743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.597027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.597061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.597241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.597302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 
00:44:20.364 [2024-07-22 23:24:56.597510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.597545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.597714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.597776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.598030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.598063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.598264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.598341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.598498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.598533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.598796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.598857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.599116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.599150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.599376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.599428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.599542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.599575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.599864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.599926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 
00:44:20.364 [2024-07-22 23:24:56.600214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.600248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.600474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.600510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.600696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.600758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.601058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.601120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.601393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.601427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.601551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.601626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.601917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.601978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.602238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.602299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.602497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.602531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.602749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.602810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 
00:44:20.364 [2024-07-22 23:24:56.603074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.603135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.603404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.603439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.603561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.603594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.603814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.603875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.604124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.604185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.604419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.604454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.604572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.604611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.604827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.604888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.605143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.605205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.605421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.605455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 
00:44:20.364 [2024-07-22 23:24:56.605618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.605652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.605822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.605883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.606110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.606172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.606402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.606437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.606559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.606600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.606810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.606870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.607167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.607228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.607440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.607474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.607607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.607640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.607859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.607927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 
00:44:20.364 [2024-07-22 23:24:56.608197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.608258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.608456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.608490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.608677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.608711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.608900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.608961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.609211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.609272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.609511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.609545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.609798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.609858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.610151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.610213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.610478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.610512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.610756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.610818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 
00:44:20.364 [2024-07-22 23:24:56.611081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.611114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.611337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.611402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.611591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.611666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.611884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.611946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.612204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.612238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.612417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.612457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.612640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.612703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.612994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.613055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.364 [2024-07-22 23:24:56.613260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.364 [2024-07-22 23:24:56.613299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.364 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.613455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.613489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.613717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.613778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.614029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.614090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.614368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.614402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.614591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.614626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.614832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.614893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.615110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.615171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.615407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.615442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.615631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.615692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.615984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.616045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.616348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.616415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.616616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.616649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.616914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.616975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.617253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.617330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.617571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.617634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.617945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.617978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.618185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.618246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.618493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.618527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.618769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.618830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.619074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.619108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.619276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.619352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.619627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.619689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.619910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.619971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.620232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.620265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.620424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.620458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.620690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.620751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.621007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.621068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.621356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.621396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.621529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.621582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.621825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.621887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.622115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.622176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.622420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.622454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.622612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.622674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.622928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.622988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.623261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.623333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.623496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.623530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.623692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.623725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.623936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.623997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.624300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.624397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.624524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.624558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.624737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.624798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.625037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.625098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.625400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.625435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.625550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.625584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.625780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.625843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.626074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.626135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.626394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.626429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.626566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.626599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.626814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.626875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.627177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.627238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.627458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.627492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.627702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.627736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.627983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.628044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.628326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.628396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.628517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.628550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.628753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.628786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.629045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.629107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.629371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.629423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.629549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.629591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.629853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.629886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.630102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.630163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.630424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.630459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.630580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.630614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.630766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.630799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.630938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.630997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.631302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.631403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.631521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.631553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.631699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.631737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.631909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.631942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.632196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.632257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.365 [2024-07-22 23:24:56.632512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.632582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 
00:44:20.365 [2024-07-22 23:24:56.632829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.365 [2024-07-22 23:24:56.632881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.365 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.633233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.633366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.633532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.633613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.633953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.634045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.634376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.634427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.634736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.634827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.635098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.635165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.635407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.635443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.635625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.635659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.635929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.636018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 
00:44:20.366 [2024-07-22 23:24:56.636292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.636403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.636579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.636669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.636978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.637029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.637349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.637426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.366 [2024-07-22 23:24:56.637617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.366 [2024-07-22 23:24:56.637697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.366 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.637974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.638057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.638307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.638354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.638493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.638529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.638731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.638801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.639138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.639226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 
00:44:20.645 [2024-07-22 23:24:56.639493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.639543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.639798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.639849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.640125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.640172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.640418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.640457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.640593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.640628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.640779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.640814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.641052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.641101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.641306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.641370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.641629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.641685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.641946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.641983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 
00:44:20.645 [2024-07-22 23:24:56.642181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.642234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.642425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.642462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.642592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.642626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.642793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.642850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.643083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.643144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.643410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.643446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.643570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.643611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.643764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.643824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.644010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.644066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.644271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.644305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 
00:44:20.645 [2024-07-22 23:24:56.644471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.644505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.644718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.644775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.644933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.644989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.645208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.645241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.645399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.645434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.645574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.645633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.645 qpair failed and we were unable to recover it. 00:44:20.645 [2024-07-22 23:24:56.645899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.645 [2024-07-22 23:24:56.645953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.646078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.646113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.646277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.646320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.646474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.646529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 
00:44:20.646 [2024-07-22 23:24:56.646728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.646762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.646874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.646909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.647105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.647139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.647397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.647432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.647576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.647631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.647803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.647859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.648097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.648150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.648326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.648361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.648495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.648556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.648726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.648792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 
00:44:20.646 [2024-07-22 23:24:56.648924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.648963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.649145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.649179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.649380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.649415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.649524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.649558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.649704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.649738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.649927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.649961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.650165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.650199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.650412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.650466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.650624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.650658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.650848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.650883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 
00:44:20.646 [2024-07-22 23:24:56.651005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.651039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.651216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.651250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.651452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.651507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.651682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.651736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.651940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.651996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.652242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.652275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.652457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.652521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.652787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.652840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.653080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.653113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 00:44:20.646 [2024-07-22 23:24:56.653304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.646 [2024-07-22 23:24:56.653346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.646 qpair failed and we were unable to recover it. 
00:44:20.646 [2024-07-22 23:24:56.653501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.646 [2024-07-22 23:24:56.653558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:20.646 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every connection retry, roughly 200 occurrences between 2024-07-22 23:24:56.653 and 23:24:56.705, with errno = 111, tqpair = 0x7f8748000b90, addr = 10.0.0.2 and port = 4420 identical in each occurrence ...]
00:44:20.652 [2024-07-22 23:24:56.705477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.652 [2024-07-22 23:24:56.705532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:20.652 qpair failed and we were unable to recover it.
00:44:20.652 [2024-07-22 23:24:56.705752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.705804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.706051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.706103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.706292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.706336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.706538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.706596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.706847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.706899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.707093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.707146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.707399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.707455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.707667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.707724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.707920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.707974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.708200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.708234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 
00:44:20.652 [2024-07-22 23:24:56.708441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.708495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.708661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.708716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.708933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.708987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.652 [2024-07-22 23:24:56.709147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.652 [2024-07-22 23:24:56.709185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.652 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.709408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.709462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.709665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.709732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.709987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.710041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.710249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.710283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.710469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.710522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.710777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.710833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 
00:44:20.653 [2024-07-22 23:24:56.711058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.711109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.711355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.711389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.711607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.711663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.711898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.711951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.712170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.712225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.712480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.712534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.712789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.712841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.713102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.713155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.713402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.713456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.713636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.713690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 
00:44:20.653 [2024-07-22 23:24:56.713893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.713947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.714185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.714218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.714371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.714429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.714635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.714690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.714938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.714993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.715234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.715268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.715477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.715530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.715733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.715788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.716036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.716088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.716282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.716326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 
00:44:20.653 [2024-07-22 23:24:56.716536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.716606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.716823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.716874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.717093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.717144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.717402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.717457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.717683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.717738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.717980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.718032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.718229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.718263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.718476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.718531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.718790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.718842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.719098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.719151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 
00:44:20.653 [2024-07-22 23:24:56.719396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.719451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.653 qpair failed and we were unable to recover it. 00:44:20.653 [2024-07-22 23:24:56.719634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.653 [2024-07-22 23:24:56.719688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.719931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.719987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.720221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.720266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.720427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.720483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.720732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.720786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.721041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.721094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.721330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.721365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.721584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.721618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.721795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.721848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 
00:44:20.654 [2024-07-22 23:24:56.722069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.722124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.722275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.722315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.722432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.722466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.722688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.722739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.722981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.723038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.723270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.723303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.723514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.723549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.723712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.723766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.723956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.724010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.724243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.724276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 
00:44:20.654 [2024-07-22 23:24:56.724474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.724509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.724729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.724783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.725037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.725089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.725295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.725339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.725578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.725612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.725861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.725913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.726169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.726224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.726436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.726470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.726691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.726742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.726991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.727045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 
00:44:20.654 [2024-07-22 23:24:56.727339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.727374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.727605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.727660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.727878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.727930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.654 [2024-07-22 23:24:56.728146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.654 [2024-07-22 23:24:56.728199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.654 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.728444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.728479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.728663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.728719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.728971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.729024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.729215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.729248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.729506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.729562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.729808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.729862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 
00:44:20.655 [2024-07-22 23:24:56.730059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.730110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.730324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.730358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.730544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.730577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.730737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.730795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.731009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.731061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.731273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.731307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.731560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.731594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.731791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.731845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.732038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.732092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.732282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.732331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 
00:44:20.655 [2024-07-22 23:24:56.732519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.732554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.732746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.732799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.732960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.733014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.733221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.733255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.733429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.733486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.733726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.733785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.734039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.734093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.734271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.734305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.734558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.734611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.734863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.734917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 
00:44:20.655 [2024-07-22 23:24:56.735110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.735165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.735412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.735467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.735730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.735784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.735996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.736049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.736248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.736282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.736500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.736556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.736760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.736813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.737054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.737109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.737347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.737381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.737634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.737689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 
00:44:20.655 [2024-07-22 23:24:56.737939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.737991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.655 [2024-07-22 23:24:56.738198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.655 [2024-07-22 23:24:56.738232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.655 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.738471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.738505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.738701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.738753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.739003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.739056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.739289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.739333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.739513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.739547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.739751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.739805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.740005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.740057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.740300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.740348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 
00:44:20.656 [2024-07-22 23:24:56.740593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.740627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.741017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.741050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.741287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.741330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.741582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.741621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.741824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.741876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.742132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.742187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.742390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.742425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.742638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.742688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.742941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.742993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.743226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.743260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 
00:44:20.656 [2024-07-22 23:24:56.743454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.743488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.743702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.743755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.743912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.743966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.744205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.744239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.744462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.744518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.744728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.744782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.745024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.745076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.745277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.745319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.745519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.745570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.745779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.745831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 
00:44:20.656 [2024-07-22 23:24:56.746035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.746087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.746339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.746374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.746641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.746693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.746943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.746993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.747160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.747194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.747431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.747466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.747687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.747742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.747947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.747998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.748232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.748265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.656 qpair failed and we were unable to recover it. 00:44:20.656 [2024-07-22 23:24:56.748523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.656 [2024-07-22 23:24:56.748588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 
00:44:20.657 [2024-07-22 23:24:56.748804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.748858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.749074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.749128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.749339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.749373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.749566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.749627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.749867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.749921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.750129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.750180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.750395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.750457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.750668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.750723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.750967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.751021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.751173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.751207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 
00:44:20.657 [2024-07-22 23:24:56.751448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.751501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.751751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.751805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.752017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.752069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.752280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.752335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.752544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.752603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.752848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.752902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.753152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.753206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.753478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.753531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.753682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.753736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.753991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.754046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 
00:44:20.657 [2024-07-22 23:24:56.754240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.754274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.754527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.754580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.754769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.754824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.755069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.755123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.755370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.755405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.755665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.755727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.755971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.756027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.756274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.756315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.756513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.756546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.756728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.756782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 
00:44:20.657 [2024-07-22 23:24:56.757024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.757078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.757319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.757354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.757548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.757582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.757828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.757881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.758101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.758154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.758380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.758415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.758668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.657 [2024-07-22 23:24:56.758719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.657 qpair failed and we were unable to recover it. 00:44:20.657 [2024-07-22 23:24:56.758957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.759013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.759197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.759230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.759467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.759502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 
00:44:20.658 [2024-07-22 23:24:56.759734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.759789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.759937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.759991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.760221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.760254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.760522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.760576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.760829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.760883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.761115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.761166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.761383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.761440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.761703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.761756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.761935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.761990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.762196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.762229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 
00:44:20.658 [2024-07-22 23:24:56.762470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.762522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.762785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.762837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.763088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.763141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.763421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.763497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.763755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.763816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.764053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.764107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.764294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.764337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.764584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.764641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.764893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.764954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.765207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.765263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 
00:44:20.658 [2024-07-22 23:24:56.765512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.765551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.765727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.765785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.766064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.766100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.766343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.766377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.766632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.766693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.766947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.767000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.767241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.767276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.767509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.767544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.767776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.767835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.768084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.768138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 
00:44:20.658 [2024-07-22 23:24:56.768347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.658 [2024-07-22 23:24:56.768383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.658 qpair failed and we were unable to recover it. 00:44:20.658 [2024-07-22 23:24:56.768598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.768654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.768890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.768925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.769148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.769211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.769418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.769454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.769663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.769721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.769969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.770036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.770248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.770282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.770501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.770557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.770765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.770823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 
00:44:20.659 [2024-07-22 23:24:56.771041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.771099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.771301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.771342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.771587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.771622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.771887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.771924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.772205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.772258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.772490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.772544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.772792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.772849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.773105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.773164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.773366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.773402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.773666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.773726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 
00:44:20.659 [2024-07-22 23:24:56.773928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.773991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.774239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.774274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.774537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.774614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.774818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.774884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.775108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.775164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.775399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.775456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.775637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.775697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.775947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.776009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.776214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.776248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.776509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.776565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 
00:44:20.659 [2024-07-22 23:24:56.776824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.776882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.777133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.777185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.777393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.777449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.777675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.777739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.777997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.778053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.778306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.778348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.659 [2024-07-22 23:24:56.778600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.659 [2024-07-22 23:24:56.778664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.659 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.778887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.778942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.779194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.779246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.779485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.779519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 
00:44:20.660 [2024-07-22 23:24:56.779733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.779788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.779999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.780053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.780293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.780347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.780592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.780631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.780896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.780950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.781210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.781265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.781448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.781482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.781705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.781763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.781928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.781990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.782200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.782234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 
00:44:20.660 [2024-07-22 23:24:56.782498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.782610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.782935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.783002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.783350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.783387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.783651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.783724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.784066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.784131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.784436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.784479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.784744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.784808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.785115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.785193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.785471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.785507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.785753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.785821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 
00:44:20.660 [2024-07-22 23:24:56.786117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.786180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.786454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.786491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.786693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.786733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.786996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.787073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.787385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.787427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.787678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.787742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.788039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.788105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.788420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.788457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.788650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.788716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.789016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.789078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 
00:44:20.660 [2024-07-22 23:24:56.789388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.789426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.789681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.789760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.790056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.790120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.790422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.660 [2024-07-22 23:24:56.790459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.660 qpair failed and we were unable to recover it. 00:44:20.660 [2024-07-22 23:24:56.790644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.790679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.790918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.790984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.791284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.791370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.791594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.791660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.791949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.792002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.792274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.792364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 
00:44:20.661 [2024-07-22 23:24:56.792587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.792663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.792967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.793031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.793383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.793424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.793584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.793619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.793813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.793885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.794156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.794219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.794549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.794586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.794820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.794856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.795149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.795213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.795545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.795582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 
00:44:20.661 [2024-07-22 23:24:56.795836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.795889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.796152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.796209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.796463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.796501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.796706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.796741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.796953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.797007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.797172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.797233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.797478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.797515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.797764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.797822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.798039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.798092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.798331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.798366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 
00:44:20.661 [2024-07-22 23:24:56.798602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.798637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.798850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.798904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.799152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.799208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.799370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.799407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.799604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.799660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.799920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.799974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.800166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.800200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.800400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.800443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.800648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.800701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.800915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.800968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 
00:44:20.661 [2024-07-22 23:24:56.801209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.801243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.801506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.661 [2024-07-22 23:24:56.801561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.661 qpair failed and we were unable to recover it. 00:44:20.661 [2024-07-22 23:24:56.801825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.801894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.802173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.802234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.802495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.802550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.802814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.802848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.803070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.803132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.803407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.803460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.803626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.803681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.803908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.803964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 
00:44:20.662 [2024-07-22 23:24:56.804175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.804209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.804480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.804536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.804748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.804801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.805053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.805087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.805349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.805402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.805662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.805728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.805949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.806003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.806236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.806270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.806533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.806570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.806773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.806829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 
00:44:20.662 [2024-07-22 23:24:56.807097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.807167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.807447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.807502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.807748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.807803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.808029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.808083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.808277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.808327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.808544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.808602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.808851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.808906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.809120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.809177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.809442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.809505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.809715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.809768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 
00:44:20.662 [2024-07-22 23:24:56.810017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.810069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.810277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.810319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.810575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.810632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.810876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.810930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.811152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.811205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.811410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.811444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.811681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.811717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.811972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.812024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.812225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.812258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 00:44:20.662 [2024-07-22 23:24:56.812532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.662 [2024-07-22 23:24:56.812594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.662 qpair failed and we were unable to recover it. 
00:44:20.662 [2024-07-22 23:24:56.812816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.812873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.813121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.813173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.813431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.813491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.813779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.813836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.814086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.814142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.814375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.814410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.814609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.814668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.814945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.815000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.815238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.815274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.815538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.815611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 
00:44:20.663 [2024-07-22 23:24:56.815867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.815921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.816158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.816212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.816394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.816452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.816695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.816753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.817008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.817062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.817296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.817340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.817571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.817630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.817814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.817868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.818078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.818134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.818341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.818377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 
00:44:20.663 [2024-07-22 23:24:56.818650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.818718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.818968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.819022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.819219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.819252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.819441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.819483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.819647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.819700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.819941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.819994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.820231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.820267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.820520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.820555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.820805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.820871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.821133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.821188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 
00:44:20.663 [2024-07-22 23:24:56.821456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.821511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.821745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.663 [2024-07-22 23:24:56.821780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.663 qpair failed and we were unable to recover it. 00:44:20.663 [2024-07-22 23:24:56.822043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.822098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.822335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.822371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.822593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.822649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.822918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.822977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.823222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.823259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.823472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.823507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.823719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.823773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.824044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.824080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 
00:44:20.664 [2024-07-22 23:24:56.824283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.824335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.824540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.824573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.824815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.824872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.825098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.825151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.825403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.825457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.825725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.825782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.825961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.826018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.826228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.826269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.826530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.826588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.826779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.826836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 
00:44:20.664 [2024-07-22 23:24:56.827055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.827109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.827342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.827377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.827632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.827686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.827931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.827986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.828224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.828258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.828510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.828545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.828789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.828842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.829092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.829145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.829390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.829447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.829696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.829763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 
00:44:20.664 [2024-07-22 23:24:56.830010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.830070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.830282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.830323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.830490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.830524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.830766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.830818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.831059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.831113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.831356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.831391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.831650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.831704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.831964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.832018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.832257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.832291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 00:44:20.664 [2024-07-22 23:24:56.832542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.664 [2024-07-22 23:24:56.832576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.664 qpair failed and we were unable to recover it. 
00:44:20.665 [2024-07-22 23:24:56.832821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.832874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.833120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.833174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.833331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.833366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.833616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.833680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.833930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.833984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.834184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.834218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.834403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.834439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.834655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.834711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.834965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.835018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.835186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.835220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 
00:44:20.665 [2024-07-22 23:24:56.835429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.835481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.835665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.835719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.835967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.836022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.836257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.836291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.836554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.836612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.836804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.836858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.837110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.837162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.837328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.837363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.837613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.837675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.837875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.837930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 
00:44:20.665 [2024-07-22 23:24:56.838128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.838180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.838401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.838461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.838683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.838735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.838953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.839007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.839242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.839276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.839525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.839582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.839782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.839836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.840036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.840089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.840343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.840378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.840616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.840650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 
00:44:20.665 [2024-07-22 23:24:56.840906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.840965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.841184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.841238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.841478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.841512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.841768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.841822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.841996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.842049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.842236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.842270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.842548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.842609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.665 [2024-07-22 23:24:56.842860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.665 [2024-07-22 23:24:56.842915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.665 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.843140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.843193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.843444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.843498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 
00:44:20.666 [2024-07-22 23:24:56.843706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.843759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.844007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.844060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.844303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.844355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.844559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.844593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.844856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.844911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.845105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.845157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.845407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.845442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.845611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.845664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.845910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.845963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.846197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.846231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 
00:44:20.666 [2024-07-22 23:24:56.846478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.846534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.846794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.846854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.847104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.847158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.847404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.847466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.847727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.847782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.847988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.848042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.848276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.848317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.848582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.848633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.848881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.848935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.849143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.849196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 
00:44:20.666 [2024-07-22 23:24:56.849392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.849453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.849704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.849759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.850003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.850057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.850300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.850341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.850585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.850620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.850827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.850881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.851134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.851188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.851433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.851468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.851724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.851775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.851999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.852054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 
00:44:20.666 [2024-07-22 23:24:56.852256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.852295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.852533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.852587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.852841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.852893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.853148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.853200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.853400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.853435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.666 qpair failed and we were unable to recover it. 00:44:20.666 [2024-07-22 23:24:56.853680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.666 [2024-07-22 23:24:56.853737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.853942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.853996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.854183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.854217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.854425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.854478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.854726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.854778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 
00:44:20.667 [2024-07-22 23:24:56.855023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.855077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.855235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.855269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.855496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.855550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.855800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.855854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.856030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.856083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.856281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.856324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.856541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.856602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.856849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.856902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.857125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.857179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.857392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.857456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 
00:44:20.667 [2024-07-22 23:24:56.857662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.857717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.857963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.858017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.858209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.858243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.858427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.858483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.858752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.858812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.859060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.859111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.859247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.859281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.859538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.859609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.859847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.859900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.860146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.860199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 
00:44:20.667 [2024-07-22 23:24:56.860441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.860496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.860657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.860710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.860957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.861012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.861243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.861277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.861499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.861554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.861765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.862034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.862089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.862231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.862265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.862455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.862508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.862730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.862782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 
00:44:20.667 [2024-07-22 23:24:56.862982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.863042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.863248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.863282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.863446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.667 [2024-07-22 23:24:56.863498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.667 qpair failed and we were unable to recover it. 00:44:20.667 [2024-07-22 23:24:56.863713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.863765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.864005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.864059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.864302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.864344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.864545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.864579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.864765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.864819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.865027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.865080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.865282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.865337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 
00:44:20.668 [2024-07-22 23:24:56.865545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.865579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.865842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.865893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.866087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.866141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.866410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.866465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.866660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.866713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.866959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.867013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.867221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.867254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.867420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.867456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.867670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.867725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.867980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.868034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 
00:44:20.668 [2024-07-22 23:24:56.868238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.868272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.868444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.868478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.868737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.868790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.869041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.869093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.869298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.869342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.869491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.869525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.869768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.869822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.870036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.870089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.870328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.870363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.870578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.870612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 
00:44:20.668 [2024-07-22 23:24:56.870828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.870880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.871087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.871141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.871353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.871389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.871581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.871615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.871788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.871841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.872044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.668 [2024-07-22 23:24:56.872098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.668 qpair failed and we were unable to recover it. 00:44:20.668 [2024-07-22 23:24:56.872349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.872384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.872532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.872584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.872829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.872884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.873090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.873145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 
00:44:20.669 [2024-07-22 23:24:56.873366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.873406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.873655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.873710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.873953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.874008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.874211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.874245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.874448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.874503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.874710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.874763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.874978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.875265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.875299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.875559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.875616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.875852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.875906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 
00:44:20.669 [2024-07-22 23:24:56.876156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.876212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.876428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.876463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.876683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.876737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.876943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.876996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.877184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.877218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.877408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.877463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.877665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.877722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.877892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.877947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.878164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.878198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.878411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.878466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 
00:44:20.669 [2024-07-22 23:24:56.878708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.878762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.878959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.879012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.879211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.879244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.879456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.879509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.879758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.879810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.880058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.880112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.880347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.880382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.880595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.880662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.880863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.880917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.881127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.881181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 
00:44:20.669 [2024-07-22 23:24:56.881399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.881456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.881656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.881713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.881926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.881979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.669 [2024-07-22 23:24:56.882170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.669 [2024-07-22 23:24:56.882203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.669 qpair failed and we were unable to recover it. 00:44:20.670 [2024-07-22 23:24:56.882456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.670 [2024-07-22 23:24:56.882512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.670 qpair failed and we were unable to recover it. 00:44:20.670 [2024-07-22 23:24:56.882712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.670 [2024-07-22 23:24:56.882765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.670 qpair failed and we were unable to recover it. 00:44:20.670 [2024-07-22 23:24:56.882943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.670 [2024-07-22 23:24:56.882997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.670 qpair failed and we were unable to recover it. 00:44:20.670 [2024-07-22 23:24:56.883210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.670 [2024-07-22 23:24:56.883244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.670 qpair failed and we were unable to recover it. 00:44:20.670 [2024-07-22 23:24:56.883450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.670 [2024-07-22 23:24:56.883505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.670 qpair failed and we were unable to recover it. 00:44:20.670 [2024-07-22 23:24:56.883742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.670 [2024-07-22 23:24:56.883795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.670 qpair failed and we were unable to recover it. 
00:44:20.670 [2024-07-22 23:24:56.884046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.670 [2024-07-22 23:24:56.884105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:20.670 qpair failed and we were unable to recover it.
00:44:20.670 [... the same three-message failure pattern (connect() failed, errno = 111; sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt between 23:24:56.884 and 23:24:56.940 ...]
00:44:20.950 [2024-07-22 23:24:56.940409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.950 [2024-07-22 23:24:56.940463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:20.950 qpair failed and we were unable to recover it.
00:44:20.950 [2024-07-22 23:24:56.940712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.940767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.940987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.941042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.941276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.941323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.941534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.941586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.941776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.941831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.942043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.942103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.942322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.942358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.942602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.942640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.942855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.942908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.943167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.943220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 
00:44:20.950 [2024-07-22 23:24:56.943458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.943493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.943706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.943761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.944004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.944057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.944243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.944276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.944490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.944524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.944686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.944739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.944958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.945011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.945223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.945262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.950 [2024-07-22 23:24:56.945519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.950 [2024-07-22 23:24:56.945573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.950 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.945841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.945894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 
00:44:20.951 [2024-07-22 23:24:56.946106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.946158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.946380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.946445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.946666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.946719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.946920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.946973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.947219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.947253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.947501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.947555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.947768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.947820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.948016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.948072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.948274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.948307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.948492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.948544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 
00:44:20.951 [2024-07-22 23:24:56.948789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.948840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.949085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.949140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.949395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.949459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.949668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.949721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.949978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.950012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.950251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.950285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.950527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.950561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.950814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a9b70 is same with the state(5) to be set 00:44:20.951 [2024-07-22 23:24:56.951254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.951369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.951616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.951684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 
00:44:20.951 [2024-07-22 23:24:56.951943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.952007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.952327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.952363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.952587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.952651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.952946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.953010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.953326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.953387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.953644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.953708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.953922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.953985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.954291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.954382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.954608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.954671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.954982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.955045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 
00:44:20.951 [2024-07-22 23:24:56.955284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.955378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.955636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.955699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.955998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.956062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.956399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.956435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.956696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.956759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.957059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.951 [2024-07-22 23:24:56.957122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.951 qpair failed and we were unable to recover it. 00:44:20.951 [2024-07-22 23:24:56.957375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.957411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.957677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.957740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.958059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.958123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.958390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.958426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 
00:44:20.952 [2024-07-22 23:24:56.958685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.958749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.958999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.959062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.959379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.959414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.959594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.959658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.959920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.959983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.960252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.960330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.960594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.960658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.960932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.960995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.961340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.961403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.961673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.961736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 
00:44:20.952 [2024-07-22 23:24:56.961992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.962055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.962388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.962423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.962638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.962703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.963006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.963070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.963392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.963428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.963595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.963659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.963959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.964023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.964272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.964348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.964623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.964686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.964993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.965056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 
00:44:20.952 [2024-07-22 23:24:56.965385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.965421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.965635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.965698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.966004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.966066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.966384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.966420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.966626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.966699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.967018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.967081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.967378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.967413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.967612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.967674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.967885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.967948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.968226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.968290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 
00:44:20.952 [2024-07-22 23:24:56.968532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.968600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.968910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.968973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.969299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.969378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.969595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.952 [2024-07-22 23:24:56.969657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.952 qpair failed and we were unable to recover it. 00:44:20.952 [2024-07-22 23:24:56.969914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.969976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.970293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.970390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.970654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.970717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.970981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.971044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.971338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.971374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.971564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.971628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 
00:44:20.953 [2024-07-22 23:24:56.971882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.971945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.972198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.972232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.972436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.972502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.972749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.972813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.973113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.973147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.973453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.973488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.973723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.973786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.974082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.974116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.974374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.974409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.974629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.974692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 
00:44:20.953 [2024-07-22 23:24:56.975003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.975037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.975362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.975398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.975642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.975705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.975969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.976004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.976274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.976363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.976632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.976695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.976991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.977025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.977342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.977406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.977709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.977773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.978035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.978069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 
00:44:20.953 [2024-07-22 23:24:56.978291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.978366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.978674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.978737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.979047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.979081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.979396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.979462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.979767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.979840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.980107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.980142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.980375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.980440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.980744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.980808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.981112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.981146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 00:44:20.953 [2024-07-22 23:24:56.981451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.981516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it. 
00:44:20.953 [2024-07-22 23:24:56.981815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.953 [2024-07-22 23:24:56.981879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.953 qpair failed and we were unable to recover it.
[... the same pair of messages — connect() failed, errno = 111 from posix.c:1023:posix_sock_create and the unrecoverable sock connection error from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock for tqpair=0x7f8740000b90, addr=10.0.0.2, port=4420 — repeats continuously from 2024-07-22 23:24:56.981 through 23:24:57.050 ...]
00:44:20.960 [2024-07-22 23:24:57.050677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.050743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it.
00:44:20.960 [2024-07-22 23:24:57.050993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.051057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.051351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.051406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.051718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.051789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.052068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.052133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.052389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.052427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.052689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.052754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.052988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.053052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.053368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.053406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.053666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.053739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.054004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.054069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 
00:44:20.960 [2024-07-22 23:24:57.054363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.054419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.054683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.054758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.055069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.055137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.055428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.055491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.055750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.055814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.056055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.960 [2024-07-22 23:24:57.056129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.960 qpair failed and we were unable to recover it. 00:44:20.960 [2024-07-22 23:24:57.056417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.056454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.056712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.056776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.057038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.057101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.057399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.057436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 
00:44:20.961 [2024-07-22 23:24:57.057677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.057721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.057921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.057984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.058260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.058349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.058602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.058668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.058946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.059028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.059377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.059414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.059633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.059712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.060021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.060085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.060378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.060416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.060669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.060734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 
00:44:20.961 [2024-07-22 23:24:57.061065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.061130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.061386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.061423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.061630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.061710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.062025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.062091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.062391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.062451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.062750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.062814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.063115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.063195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.063474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.063510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.063766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.063833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.064087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.064152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 
00:44:20.961 [2024-07-22 23:24:57.064476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.064514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.064772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.064807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.065076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.065140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.065382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.065420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.065661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.065706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.065876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.065941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.066545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.066584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.066861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.066929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.067252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.067333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.067654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.067690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 
00:44:20.961 [2024-07-22 23:24:57.068011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.068077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.068376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.068453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.068716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.961 [2024-07-22 23:24:57.068752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.961 qpair failed and we were unable to recover it. 00:44:20.961 [2024-07-22 23:24:57.069011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.069076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.069281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.069375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.069690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.069725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.069998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.070067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.070341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.070408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.070663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.070700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.070898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.070964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 
00:44:20.962 [2024-07-22 23:24:57.071237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.071304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.071504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.071539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.071705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.071742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.072019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.072084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.072398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.072437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.072737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.072806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.073127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.073192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.073430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.073470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.073650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.073716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.074022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.074086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 
00:44:20.962 [2024-07-22 23:24:57.074401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.074438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.074699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.074766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.075097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.075162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.075427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.075464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.075689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.075753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.076066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.076131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.076387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.076426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.076710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.076775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.077084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.077150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.077453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.077489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 
00:44:20.962 [2024-07-22 23:24:57.077700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.077736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.078005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.078072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.078403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.078439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.078661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.078727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.079037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.079101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.079380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.079424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.079624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.079659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.079892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.079928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.080114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.080152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.962 [2024-07-22 23:24:57.080395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.080432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 
00:44:20.962 [2024-07-22 23:24:57.080669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.962 [2024-07-22 23:24:57.080705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.962 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.080861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.080902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.081132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.081207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.081483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.081548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.081853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.081926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.082177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.082247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.082599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.082665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.082958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.082994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.083269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.083350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.083614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.083680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 
00:44:20.963 [2024-07-22 23:24:57.083964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.083999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.084242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.084278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.084556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.084636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.084918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.084954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.085193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.085229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.085498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.085567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.085861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.085897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.086109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.086145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.086364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.086431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.086741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.086778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 
00:44:20.963 [2024-07-22 23:24:57.087050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.087116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.087436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.087502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.087731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.087771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.087988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.088054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.088366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.088433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.088701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.088737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.088991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.089059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.089346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.089414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.089711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.089747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.090014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.090078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 
00:44:20.963 [2024-07-22 23:24:57.090381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.090455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.090783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.090818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.091098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.091163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.091431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.091504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.091806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.091841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.092053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.092089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.092407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.092490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.092771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.963 [2024-07-22 23:24:57.092807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.963 qpair failed and we were unable to recover it. 00:44:20.963 [2024-07-22 23:24:57.093008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.093044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.093260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.093355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 
00:44:20.964 [2024-07-22 23:24:57.093679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.093716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.094025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.094101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.094410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.094476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.094787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.094823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.095131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.095194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.095483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.095556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.095831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.095874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.096146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.096209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.096538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.096607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.096858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.096896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 
00:44:20.964 [2024-07-22 23:24:57.097132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.097197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.097509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.097550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.097877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.097912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.098191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.098258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.098543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.098634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.098933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.098969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.099163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.099200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.099475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.099516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.099764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.099800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.099968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.100021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 
00:44:20.964 [2024-07-22 23:24:57.100264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.100346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.100549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.100587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.100852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.100915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.101177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.101243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.101567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.101604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.101892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.101957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.102254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.102333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.102606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.102663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.102986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.964 [2024-07-22 23:24:57.103058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.964 qpair failed and we were unable to recover it. 00:44:20.964 [2024-07-22 23:24:57.103341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.103404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 
00:44:20.965 [2024-07-22 23:24:57.103624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.103660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.103967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.104032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.104384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.104421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.104680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.104717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.104982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.105046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.105373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.105410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.105602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.105636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.105892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.105958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.106228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.106306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.106566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.106602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 
00:44:20.965 [2024-07-22 23:24:57.106803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.106839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.107111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.107198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.107542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.107579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.107758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.107824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.108093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.108156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.108454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.108491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.108770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.108833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.109125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.109191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.109506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.109563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.109876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.109942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 
00:44:20.965 [2024-07-22 23:24:57.110244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.110333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.110572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.110607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.110903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.110969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.111240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.111303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.111598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.111654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.111976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.112048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.112375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.112412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.112640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.112676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.112859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.112894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.113134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.113171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 
00:44:20.965 [2024-07-22 23:24:57.113355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.113399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.113659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.113694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.113902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.113939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.114181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.114218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.114467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.114507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.114729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.114765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.965 qpair failed and we were unable to recover it. 00:44:20.965 [2024-07-22 23:24:57.114922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.965 [2024-07-22 23:24:57.114963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.115282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.115365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.115602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.115676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.115982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.116047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 
00:44:20.966 [2024-07-22 23:24:57.116368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.116406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.116605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.116669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.116965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.117036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.117360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.117408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.117669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.117734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.118033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.118110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.118430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.118467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.118699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.118765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.119071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.119134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.119413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.119450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 
00:44:20.966 [2024-07-22 23:24:57.119699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.119753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.120058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.120133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.120387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.120425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.120668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.120710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.121017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.121080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.121381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.121426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.121632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.121696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.121994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.122059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.122337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.122396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.122607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.122676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 
00:44:20.966 [2024-07-22 23:24:57.122875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.122938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.123230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.123323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.123619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.123698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.123995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.124060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.124384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.124421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.124607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.124670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.124923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.124986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.125239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.125302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.125554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.125637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.125893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.125955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 
00:44:20.966 [2024-07-22 23:24:57.126270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.126362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.126531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.126594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.126828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.126891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.966 [2024-07-22 23:24:57.127154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.966 [2024-07-22 23:24:57.127218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.966 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.127473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.127508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.127703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.127767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.128076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.128110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.128397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.128433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.128651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.128715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.128977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.129011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 
00:44:20.967 [2024-07-22 23:24:57.129257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.129338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.129593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.129657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.129951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.129986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.130235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.130298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.130530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.130598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.130849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.130883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.131133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.131195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.131451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.131487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.131727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.131761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.132040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.132104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 
00:44:20.967 [2024-07-22 23:24:57.132399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.132435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.132619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.132658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.132936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.132999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.133294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.133384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.133619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.133653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.133872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.133935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.134228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.134291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.134590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.134655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.134972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.135036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.135267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.135345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 
00:44:20.967 [2024-07-22 23:24:57.135616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.135673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.135929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.135991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.136292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.136402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.136618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.136653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.136893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.136956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.137231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.137295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.137625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.137660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.137875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.137937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.138230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.138293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.138634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.138669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 
00:44:20.967 [2024-07-22 23:24:57.138985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.967 [2024-07-22 23:24:57.139047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.967 qpair failed and we were unable to recover it. 00:44:20.967 [2024-07-22 23:24:57.139354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.139420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.139717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.139751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.140064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.140126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.140439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.140504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.140824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.140859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.141124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.141187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.141460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.141525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.141838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.141873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.142193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.142256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 
00:44:20.968 [2024-07-22 23:24:57.142541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.142607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.142919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.142953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.143256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.143354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.143637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.143701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.143944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.143978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.144225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.144288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.144581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.144645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.144904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.144938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.145152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.145216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.145518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.145553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 
00:44:20.968 [2024-07-22 23:24:57.145855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.145890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.146201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.146274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.146573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.146637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.146946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.146980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.147298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.147381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.147646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.147710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.148011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.148045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.148295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.148379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.148640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.148703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 00:44:20.968 [2024-07-22 23:24:57.149002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.968 [2024-07-22 23:24:57.149037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.968 qpair failed and we were unable to recover it. 
00:44:20.968 [2024-07-22 23:24:57.149345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:20.968 [2024-07-22 23:24:57.149410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:20.968 qpair failed and we were unable to recover it.
00:44:20.968-00:44:20.974 [2024-07-22 23:24:57.149709 through 23:24:57.218032] the same three-line sequence repeats continuously: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:44:20.974 [2024-07-22 23:24:57.218332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.218397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.218700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.218764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.219031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.219065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.219269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.219346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.219615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.219679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.219988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.220023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.220284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.220380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.220619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.220682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.220979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.221019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 00:44:20.974 [2024-07-22 23:24:57.221200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.974 [2024-07-22 23:24:57.221262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.974 qpair failed and we were unable to recover it. 
00:44:20.975 [2024-07-22 23:24:57.221594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.221661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.221975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.222009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.222327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.222391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.222688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.222751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.223009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.223043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.223298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.223375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.223642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.223705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.223972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.224006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.224277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.224368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.224678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.224740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 
00:44:20.975 [2024-07-22 23:24:57.224996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.225030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.225238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.225300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.225636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.225700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.225961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.225995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.226245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.226322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.226627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.226690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.226988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.227022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.227330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.227395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.227698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.227761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.228076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.228111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 
00:44:20.975 [2024-07-22 23:24:57.228424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.228489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.228743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.228806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.229007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.229041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.229212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.229274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.229574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.229640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.229875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.229909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.230105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.230167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.230463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.230528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.230832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.230866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.231126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.231189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 
00:44:20.975 [2024-07-22 23:24:57.231437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.231501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.231813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.231847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.232164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.232226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.232540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.232605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.232901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.232936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.233244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.233306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.233620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.975 [2024-07-22 23:24:57.233682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.975 qpair failed and we were unable to recover it. 00:44:20.975 [2024-07-22 23:24:57.233979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.234013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.234263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.234352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.234616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.234679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 
00:44:20.976 [2024-07-22 23:24:57.234975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.235009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.235258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.235333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.235602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.235664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.235902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.235936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.236152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.236215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.236556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.236622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.236921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.236955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.237222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.237285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.237551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.237614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.237910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.237944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 
00:44:20.976 [2024-07-22 23:24:57.238199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.238262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.238579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.238642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.238902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.238936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.239192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.239255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.239575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.239638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.239948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.239982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.240289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.240366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.240686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.240748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.241063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.241097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.241338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.241402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 
00:44:20.976 [2024-07-22 23:24:57.241695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.241757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.242048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.242082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.242291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.242369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.242674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.242736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.243039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.243073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.243363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.243429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.243702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.243766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.244064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.244098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.244406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.244470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.244718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.244781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 
00:44:20.976 [2024-07-22 23:24:57.245035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.245069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.245295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.245370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.245637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.245700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.245964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.245998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.976 [2024-07-22 23:24:57.246177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:20.976 [2024-07-22 23:24:57.246239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:20.976 qpair failed and we were unable to recover it. 00:44:20.977 [2024-07-22 23:24:57.246557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.246626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.246927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.246963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.247210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.247274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.247554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.247627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.247893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.247927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 
00:44:21.250 [2024-07-22 23:24:57.248119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.248154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.248354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.248419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.248707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.248742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.248935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.248970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.249150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.249185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.249342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.249377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.249539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.249573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.249722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.249756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.249993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.250027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.250232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.250266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 
00:44:21.250 [2024-07-22 23:24:57.250467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.250 [2024-07-22 23:24:57.250502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.250 qpair failed and we were unable to recover it. 00:44:21.250 [2024-07-22 23:24:57.250692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.250726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.250966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.251000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.251245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.251300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.251608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.251642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.251901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.251964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.252272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.252351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.252652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.252687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.252935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.252997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.253288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.253366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 
00:44:21.251 [2024-07-22 23:24:57.253683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.253717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.254041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.254103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.254398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.254464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.254763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.254797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.255051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.255114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.255336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.255401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.255700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.255734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.256018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.256080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.256348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.256412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.256712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.256746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 
00:44:21.251 [2024-07-22 23:24:57.257047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.257110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.257405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.257469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.257777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.257811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.258118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.258179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.258470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.258505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.258656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.258691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.258894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.258957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.259171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.259233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.259551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.259592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 00:44:21.251 [2024-07-22 23:24:57.259857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.251 [2024-07-22 23:24:57.259919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.251 qpair failed and we were unable to recover it. 
00:44:21.251 [2024-07-22 23:24:57.260210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.251 [2024-07-22 23:24:57.260273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.251 qpair failed and we were unable to recover it.
00:44:21.251 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 23:24:57.260546 through 23:24:57.331043, always for the same tqpair, address, and port ...]
00:44:21.257 [2024-07-22 23:24:57.331109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.257 [2024-07-22 23:24:57.331176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.257 qpair failed and we were unable to recover it.
00:44:21.257 [2024-07-22 23:24:57.331474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.331540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.331813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.331850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.332079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.332143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.332451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.332519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.332804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.332840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.333067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.333132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.333420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.333483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.333807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.257 [2024-07-22 23:24:57.333844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.257 qpair failed and we were unable to recover it. 00:44:21.257 [2024-07-22 23:24:57.334150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.334227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.334553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.334618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 
00:44:21.258 [2024-07-22 23:24:57.334928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.334964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.335179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.335252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.335585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.335653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.335897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.335935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.336165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.336229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.336551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.336618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.336901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.336937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.337108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.337144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.337437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.337474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.337627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.337663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 
00:44:21.258 [2024-07-22 23:24:57.337852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.337887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.338093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.338129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.338428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.338465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.338770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.338834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.339124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.339186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.339474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.339510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.339800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.339863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.340124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.340187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.340463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.340504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.340728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.340791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 
00:44:21.258 [2024-07-22 23:24:57.341091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.341155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.341459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.341495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.341700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.341764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.342056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.342119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.342414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.342449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.342647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.342711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.342976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.343039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.343341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.343377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.343683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.343746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.344042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.344104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 
00:44:21.258 [2024-07-22 23:24:57.344406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.344443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.258 qpair failed and we were unable to recover it. 00:44:21.258 [2024-07-22 23:24:57.344749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.258 [2024-07-22 23:24:57.344812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.345073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.345135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.345426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.345462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.345711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.345774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.346079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.346142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.346445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.346481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.346688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.346751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.346995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.347058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.347356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.347392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 
00:44:21.259 [2024-07-22 23:24:57.347642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.347705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.347957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.348020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.348340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.348405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.348712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.348775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.349026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.349090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.349393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.349429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.349734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.349798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.350055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.350117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.350380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.350416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.350644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.350708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 
00:44:21.259 [2024-07-22 23:24:57.350960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.351023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.351241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.351276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.351622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.351688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.352092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.352194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.352537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.352608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.352917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.352983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.353193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.353257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.353574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.353610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.353924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.354001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.354324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.354389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 
00:44:21.259 [2024-07-22 23:24:57.354692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.354727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.354986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.355051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.355357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.355423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.355689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.355724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.355986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.356050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.356323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.259 [2024-07-22 23:24:57.356388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.259 qpair failed and we were unable to recover it. 00:44:21.259 [2024-07-22 23:24:57.356663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.356698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.356938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.357002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.357295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.357371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.357679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.357714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 
00:44:21.260 [2024-07-22 23:24:57.358022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.358086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.358345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.358410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.358697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.358732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.358997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.359061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.359330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.359394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.359628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.359663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.359924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.359988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.360287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.360383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.360636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.360695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.360961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.361026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 
00:44:21.260 [2024-07-22 23:24:57.361333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.361398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.361702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.361737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.361950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.362014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.362270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.362348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.362674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.362709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.363032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.363106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.363406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.363471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.363726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.363761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.364007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.364072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.364371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.364436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 
00:44:21.260 [2024-07-22 23:24:57.364748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.364783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.365097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.365161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.365413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.365479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.365790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.365826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.366140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.366204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.366526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.366592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.366887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.366923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.260 qpair failed and we were unable to recover it. 00:44:21.260 [2024-07-22 23:24:57.367171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.260 [2024-07-22 23:24:57.367234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.367545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.367610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.367891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.367926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 
00:44:21.261 [2024-07-22 23:24:57.368190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.368253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.368565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.368629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.368937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.368972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.369285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.369376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.369652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.369716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.370023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.370058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.370365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.370401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.370615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.370678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.370894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.370929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.371147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.371211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 
00:44:21.261 [2024-07-22 23:24:57.371524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.371589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.371882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.371917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.372140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.372204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.372485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.372550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.372848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.372883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.373152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.373216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.373534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.373598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.373859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.373894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.374161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.374225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 00:44:21.261 [2024-07-22 23:24:57.374493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.261 [2024-07-22 23:24:57.374559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.261 qpair failed and we were unable to recover it. 
00:44:21.261 [2024-07-22 23:24:57.374853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.261 [2024-07-22 23:24:57.374888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:21.261 qpair failed and we were unable to recover it.
00:44:21.261 - 00:44:21.267 last message sequence repeated for every reconnect attempt between [2024-07-22 23:24:57.374853] and [2024-07-22 23:24:57.445306]: posix_sock_create connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420, and the qpair fails without recovery.
00:44:21.267 [2024-07-22 23:24:57.445650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.445686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.445993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.446058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.446331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.446395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.446695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.446730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.447024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.447087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.447390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.447456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.447767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.447802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.448066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.448130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.448380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.448446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 00:44:21.267 [2024-07-22 23:24:57.448726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.448762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.267 qpair failed and we were unable to recover it. 
00:44:21.267 [2024-07-22 23:24:57.449005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.267 [2024-07-22 23:24:57.449069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.449337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.449402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.449713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.449749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.450057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.450122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.450381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.450446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.450744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.450779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.450992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.451056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.451362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.451427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.451729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.451764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.452058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.452122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 
00:44:21.268 [2024-07-22 23:24:57.452405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.452470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.452722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.452757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.453010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.453074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.453380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.453446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.453748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.453783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.454056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.454120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.454362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.454428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.454696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.454731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.454960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.455024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.455291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.455370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 
00:44:21.268 [2024-07-22 23:24:57.455667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.455702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.456012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.456076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.456388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.456453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.456760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.456795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.457092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.457156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.457414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.457490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.457803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.457838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.458155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.458219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.458475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.458511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.458715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.458750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 
00:44:21.268 [2024-07-22 23:24:57.458996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.459060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.459370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.459434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.459703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.459738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.460001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.460065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.460369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.460434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.268 [2024-07-22 23:24:57.460743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.268 [2024-07-22 23:24:57.460778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.268 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.461078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.461143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.461402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.461467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.461736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.461771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.462030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.462094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 
00:44:21.269 [2024-07-22 23:24:57.462390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.462456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.462767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.462802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.463061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.463125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.463434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.463500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.463817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.463852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.464175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.464240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.464564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.464629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.464940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.464975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.465184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.465247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.465523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.465588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 
00:44:21.269 [2024-07-22 23:24:57.465884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.465919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.466227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.466290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.466555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.466620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.466928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.466963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.467262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.467341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.467617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.467681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.467977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.468012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.468268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.468347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.468653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.468718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.469020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.469055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 
00:44:21.269 [2024-07-22 23:24:57.469361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.469427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.469683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.469748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.469996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.470031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.470288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.470367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.470674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.470737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.470999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.471039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.471320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.471385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.471661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.471725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.269 qpair failed and we were unable to recover it. 00:44:21.269 [2024-07-22 23:24:57.471979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.269 [2024-07-22 23:24:57.472014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.472221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.472285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 
00:44:21.270 [2024-07-22 23:24:57.472596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.472659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.472919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.472954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.473198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.473262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.473569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.473633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.473892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.473927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.474136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.474201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.474471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.474537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.474765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.474801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.475019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.475083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.475390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.475456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 
00:44:21.270 [2024-07-22 23:24:57.475764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.475799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.476063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.476127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.476381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.476446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.476747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.476782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.477066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.477129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.477440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.477505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.477801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.477836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.478116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.478179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.478450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.478516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.478824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.478859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 
00:44:21.270 [2024-07-22 23:24:57.479149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.479213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.479528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.479594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.479865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.479900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.480154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.480217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.480540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.480606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.480912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.480948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.481258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.481336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.481648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.481712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.270 qpair failed and we were unable to recover it. 00:44:21.270 [2024-07-22 23:24:57.481936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.270 [2024-07-22 23:24:57.481971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.482220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.482285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 
00:44:21.271 [2024-07-22 23:24:57.482570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.482635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.482928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.482963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.483224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.483289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.483627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.483692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.483962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.483997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.484260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.484350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.484666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.484730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.485035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.485070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.485376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.485412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.485636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.485700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 
00:44:21.271 [2024-07-22 23:24:57.485948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.485983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.486154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.486219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.486496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.486561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.486864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.486899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.487205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.487269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.487539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.487604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.487864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.487899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.488085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.488149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.488372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.488437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.488710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.488745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 
00:44:21.271 [2024-07-22 23:24:57.488945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.489009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.489325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.489389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.489683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.489718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.489977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.490041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.490289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.490367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.490671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.490706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.491009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.491073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.491365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.491431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.491732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.491767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.492062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.492126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 
00:44:21.271 [2024-07-22 23:24:57.492401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.492466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.492755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.492790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.493076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.493141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.493400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.493466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.493759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.493795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.494065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-07-22 23:24:57.494129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.271 qpair failed and we were unable to recover it. 00:44:21.271 [2024-07-22 23:24:57.494394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.494460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.494720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.494755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.494949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.495013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.495325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.495390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 
00:44:21.272 [2024-07-22 23:24:57.495654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.495689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.495951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.496015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.496280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.496365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.496615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.496672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.496940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.497003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.497295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.497392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.497659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.497693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.497953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.498018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.498322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.498387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.498662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.498697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 
00:44:21.272 [2024-07-22 23:24:57.498962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.499026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.499360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.499425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.499730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.499766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.500071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.500134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.500432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.500499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.500758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.500793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.501036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.501100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.501405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.501470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.501771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.501806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.502113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.502178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 
00:44:21.272 [2024-07-22 23:24:57.502468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.502534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.502842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.502877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.503186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.503250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.503556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.503622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.503897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.503932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.504175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.504239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.504526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.504591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.504893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.504928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.505234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.505298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.505580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.505643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 
00:44:21.272 [2024-07-22 23:24:57.505945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.505980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.506238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.506301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.506643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.506707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.272 [2024-07-22 23:24:57.506959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.272 [2024-07-22 23:24:57.506994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.272 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.507234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.507298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.507579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.507644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.507948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.507983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.508295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.508376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.508637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.508700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.509001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.509036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 
00:44:21.273 [2024-07-22 23:24:57.509358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.509394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.509571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.509639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.509946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.509981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.510280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.510356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.510660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.510725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.511021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.511062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.511377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.511443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.511706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.511770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.511982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.512032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.512352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.512445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 
00:44:21.273 [2024-07-22 23:24:57.512829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.512919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.513231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.513281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.513667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.513757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.514126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.514196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.514475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.514512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.514817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.514883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.515198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.515288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.515683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.515783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.516147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.516240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.516603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.516693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 
00:44:21.273 [2024-07-22 23:24:57.517044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.517123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.517411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.517502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.517863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.517951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.518290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.518366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.518732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.518822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.519176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.519246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.519546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.519583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.519912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.519977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.520305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.520412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.520704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.520753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 
00:44:21.273 [2024-07-22 23:24:57.521005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.521094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.273 [2024-07-22 23:24:57.521463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.273 [2024-07-22 23:24:57.521554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.273 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.521915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.522009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.522386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.522478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.522847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.522935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.523285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.523393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.523719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.523808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.524168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.524236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.524504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.524541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.524833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.524897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 
00:44:21.274 [2024-07-22 23:24:57.525155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.525240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.525598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.525647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.526022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.526111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.526409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.526500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.526855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.526931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.527240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.527329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.527632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.527697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.527997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.528051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.528397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.528490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.528861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.528947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 
00:44:21.274 [2024-07-22 23:24:57.529240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.529289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.529650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.529739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.530065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.530156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.530479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.530516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.530780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.530844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.531111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.531176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.531468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.531518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.531797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.531869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.532248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.532356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 00:44:21.274 [2024-07-22 23:24:57.532729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.532824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.274 qpair failed and we were unable to recover it. 
00:44:21.274 [2024-07-22 23:24:57.533138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.274 [2024-07-22 23:24:57.533227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.533588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.533679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.534006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.534056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.534391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.534482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.534860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.534946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.535218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.535253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.535468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.535504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.535695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.535744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.536007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.536055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.536330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.536421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 
00:44:21.275 [2024-07-22 23:24:57.536791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.536890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.537215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.537254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.537601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.537688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.537978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.538042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.538291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.538346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.538672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.538736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.539041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.539108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.539395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.539432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.539779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.539845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.540103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.540167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 
00:44:21.275 [2024-07-22 23:24:57.540482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.540519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.540845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.540924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.541190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.541254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.541563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.541600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.541917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.541981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.542304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.542388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.542626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.542663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.542892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.542957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.543255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.543342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.543671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.543706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 
00:44:21.275 [2024-07-22 23:24:57.543888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.543962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.544218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.544283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.544626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.544663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.544896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.544970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.275 [2024-07-22 23:24:57.545278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.275 [2024-07-22 23:24:57.545368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.275 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.545649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.545685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.545901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.545937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.546211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.546274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.546600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.546662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.546948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.547026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 
00:44:21.276 [2024-07-22 23:24:57.547244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.547334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.547630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.547672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.276 [2024-07-22 23:24:57.547993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.276 [2024-07-22 23:24:57.548057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.276 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.548366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.548406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.548621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.548657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.548878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.548945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.549203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.549268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.549604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.549641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.549884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.549926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.550082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.550117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 
00:44:21.551 [2024-07-22 23:24:57.550336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.550373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.550630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.550665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.550891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.550938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.551143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.551180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.551427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.551471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.551645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.551681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.551833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.551869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.552059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.552094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.552293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.552345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.552591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.552661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 
00:44:21.551 [2024-07-22 23:24:57.552959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.553023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.553346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.553418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.553739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.553783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.554029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.554093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.554349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.554417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.554654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.554689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.554905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.554972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.555246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.555324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.555649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.555685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.556000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.556081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 
00:44:21.551 [2024-07-22 23:24:57.556398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.556434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.556645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.556682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.556937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.551 [2024-07-22 23:24:57.557001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.551 qpair failed and we were unable to recover it. 00:44:21.551 [2024-07-22 23:24:57.557343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.557410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.557701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.557736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.558016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.558080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.558352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.558421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.558711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.558746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.558990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.559056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.559372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.559439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 
00:44:21.552 [2024-07-22 23:24:57.559729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.559765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.559961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.559997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.560244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.560320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.560636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.560673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.560977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.561045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.561340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.561407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.561675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.561711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.561945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.562010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.562325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.562411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.562728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.562764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 
00:44:21.552 [2024-07-22 23:24:57.563042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.563108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.563418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.563484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.563775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.563817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.564015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.564077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.564399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.564466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.564764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.564824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.565119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.565182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.565514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.565551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.565801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.565837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.566100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.566163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 
00:44:21.552 [2024-07-22 23:24:57.566433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.566515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.566768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.566802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.567062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.567129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.567348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.567416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.567720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.567756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.567977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.568041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.568369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.568437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.568729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.568793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.569112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.569178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 00:44:21.552 [2024-07-22 23:24:57.569482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.552 [2024-07-22 23:24:57.569549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.552 qpair failed and we were unable to recover it. 
00:44:21.553 [2024-07-22 23:24:57.569856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.569936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.570183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.570249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.570563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.570627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.570931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.570967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.571250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.571334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.571612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.571677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.571979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.572052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.572361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.572429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.572753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.572819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.573138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.573201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 
00:44:21.553 [2024-07-22 23:24:57.573504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.573539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.573756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.573823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.574081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.574116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.574395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.574431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.574681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.574751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.575035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.575070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.575342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.575410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.575708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.575786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.576061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.576096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.576299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.576347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 
00:44:21.553 [2024-07-22 23:24:57.576544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.576617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.576920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.576956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.577145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.577188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.577404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.577470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.577755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.577791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.578070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.578133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.578391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.578457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.578769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.578810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.579138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.579204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.579530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.579609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 
00:44:21.553 [2024-07-22 23:24:57.579913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.579948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.580291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.580374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.580684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.580748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.581060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.581096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.581257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.581351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.581680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.553 [2024-07-22 23:24:57.581746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.553 qpair failed and we were unable to recover it. 00:44:21.553 [2024-07-22 23:24:57.582044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.582100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.582381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.582447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.582698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.582776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.583083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.583118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 
00:44:21.554 [2024-07-22 23:24:57.583359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.583426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.583679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.583743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.584067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.584104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.584323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.584360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.584637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.584701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.585004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.585040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.585299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.585378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.585599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.585665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.585926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.585968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.586289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.586384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 
00:44:21.554 [2024-07-22 23:24:57.586650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.586716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.587011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.587067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.587350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.587404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.587601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.587657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.587944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.588009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.588264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.588359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.588587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.588622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.588816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.588852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.589078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.589143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.589435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.589503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 
00:44:21.554 [2024-07-22 23:24:57.589723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.589758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.590050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.590116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.590411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.590502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.590814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.590850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.590986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.591021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.591176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.591242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.591485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.591520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.591737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.591800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.592100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.592175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.592487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.592523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 
00:44:21.554 [2024-07-22 23:24:57.592747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.592814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.593074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.593150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.593469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.554 [2024-07-22 23:24:57.593507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.554 qpair failed and we were unable to recover it. 00:44:21.554 [2024-07-22 23:24:57.593715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.593753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.594069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.594134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.594426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.594465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.594622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.594666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.594881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.594946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.595246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.595286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.595550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.595617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 
00:44:21.555 [2024-07-22 23:24:57.595917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.595984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.596287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.596383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.596617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.596684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.596985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.597050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.597337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.597400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.597627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.597693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.598007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.598074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.598382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.598420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.598640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.598706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.599019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.599085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 
00:44:21.555 [2024-07-22 23:24:57.599381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.599420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.599664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.599732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.600016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.600082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.600340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.600378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.600645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.600712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.600918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.600983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.601257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.601294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.601563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.601645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.601977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.602044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.602356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.602428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 
00:44:21.555 [2024-07-22 23:24:57.602631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.602668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.602966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.603034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.603357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.603401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.603651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.555 [2024-07-22 23:24:57.603718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.555 qpair failed and we were unable to recover it. 00:44:21.555 [2024-07-22 23:24:57.604021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.604087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.604412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.604451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.604652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.604707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.605012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.605077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.605380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.605444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.605698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.605764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 
00:44:21.556 [2024-07-22 23:24:57.606064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.606132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.606445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.606506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.606840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.606907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.607223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.607291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.607569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.607606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.607827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.607898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.608213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.608279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.608613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.608651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.608848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.608911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.609192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.609260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 
00:44:21.556 [2024-07-22 23:24:57.609549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.609593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.609915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.609980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.610286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.610382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.610645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.610682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.610939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.611005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.611300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.611382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.611554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.611590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.611801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.611865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.612124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.612189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.612503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.612540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 
00:44:21.556 [2024-07-22 23:24:57.612765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.612831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.613087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.613152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.613416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.613453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.613699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.613765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.614069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.614134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.614394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.614432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.614609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.614673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.614971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.615036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.615362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.615400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.615606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.615672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 
00:44:21.556 [2024-07-22 23:24:57.615930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.615996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.556 qpair failed and we were unable to recover it. 00:44:21.556 [2024-07-22 23:24:57.616291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.556 [2024-07-22 23:24:57.616376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.616685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.616765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.617066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.617132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.617399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.617437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.617675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.617740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.617990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.618056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.618357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.618395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.618659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.618724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.618970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.619034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 
00:44:21.557 [2024-07-22 23:24:57.619288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.619334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.619567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.619634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.619933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.619998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.620290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.620344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.620627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.620692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.620993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.621057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.621329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.621367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.621548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.621613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.621874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.621939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.622236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.622272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 
00:44:21.557 [2024-07-22 23:24:57.622593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.622659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.622955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.623020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.623363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.623401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.623647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.623711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.624014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.624079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.624393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.624430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.624651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.624717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.625011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.625076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.625395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.625433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.625685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.625750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 
00:44:21.557 [2024-07-22 23:24:57.626041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.626106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.626403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.626440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.626715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.626781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.627072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.627136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.627435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.627473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.627779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.627844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.628143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.628208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.628529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.628567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.628825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.628890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.557 qpair failed and we were unable to recover it. 00:44:21.557 [2024-07-22 23:24:57.629187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.557 [2024-07-22 23:24:57.629252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 
00:44:21.558 [2024-07-22 23:24:57.629568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.629605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.629912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.629977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.630237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.630324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.630562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.630598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.630841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.630906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.631206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.631270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.631543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.631579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.631845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.631910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.632209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.632273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.632569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.632606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 
00:44:21.558 [2024-07-22 23:24:57.632880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.632944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.633252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.633333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.633601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.633636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.633857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.633921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.634216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.634280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.634592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.634628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.634850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.634916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.635168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.635233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.635546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.635582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.635881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.635947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 
00:44:21.558 [2024-07-22 23:24:57.636233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.636298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.636626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.636662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.636871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.636936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.637235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.637301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.637641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.637677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.637991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.638057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.638355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.638421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.638691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.638727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.638930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.638995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.639274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.639351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 
00:44:21.558 [2024-07-22 23:24:57.639602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.639638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.639846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.639910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.640165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.640229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.640503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.640540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.640781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.640846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.641144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.558 [2024-07-22 23:24:57.641208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.558 qpair failed and we were unable to recover it. 00:44:21.558 [2024-07-22 23:24:57.641521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.641558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.641828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.641893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.642211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.642276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.642576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.642612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 
00:44:21.559 [2024-07-22 23:24:57.642907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.642972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.643270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.643350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.643665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.643706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.644024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.644088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.644389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.644455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.644768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.644804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.645103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.645169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.645429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.645494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.645775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.645811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.646088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.646153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 
00:44:21.559 [2024-07-22 23:24:57.646460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.646525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.646823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.646859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.647125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.647190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.647490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.647555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.647863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.647899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.648216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.648280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.648623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.648689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.648992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.649029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.649363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.649430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.649687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.649753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 
00:44:21.559 [2024-07-22 23:24:57.650053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.650089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.650358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.650426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.650684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.650750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.651044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.651080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.651344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.651410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.651712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.651777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.652078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.652114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.652362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.652428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.652705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.652770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.653034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.653071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 
00:44:21.559 [2024-07-22 23:24:57.653347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.653413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.653685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.653749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.654055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.654092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.559 qpair failed and we were unable to recover it. 00:44:21.559 [2024-07-22 23:24:57.654388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.559 [2024-07-22 23:24:57.654455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.654753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.654818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.655121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.655158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.655422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.655458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.655719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.655785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.656075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.656112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.656414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.656480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 
00:44:21.560 [2024-07-22 23:24:57.656748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.656813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.657073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.657109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.657323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.657398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.657650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.657715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.658007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.658044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.658343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.658409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.658654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.658719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.659011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.659047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.659302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.659378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.659642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.659707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 
00:44:21.560 [2024-07-22 23:24:57.659979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.660014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.660289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.660385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.660633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.660698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.661010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.661045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.661359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.661425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.661734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.661799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.662108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.662144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.662408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.662474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.662789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.662854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 00:44:21.560 [2024-07-22 23:24:57.663152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.560 [2024-07-22 23:24:57.663188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.560 qpair failed and we were unable to recover it. 
00:44:21.560 [2024-07-22 23:24:57.663459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.560 [2024-07-22 23:24:57.663525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.560 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps changing, from 23:24:57.663 through 23:24:57.729 ...]
00:44:21.566 [2024-07-22 23:24:57.729948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.566 [2024-07-22 23:24:57.729989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.566 qpair failed and we were unable to recover it.
00:44:21.566 [2024-07-22 23:24:57.730179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.566 [2024-07-22 23:24:57.730213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.566 qpair failed and we were unable to recover it. 00:44:21.566 [2024-07-22 23:24:57.730400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.566 [2024-07-22 23:24:57.730437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.566 qpair failed and we were unable to recover it. 00:44:21.566 [2024-07-22 23:24:57.730636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.566 [2024-07-22 23:24:57.730701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.566 qpair failed and we were unable to recover it. 00:44:21.566 [2024-07-22 23:24:57.730997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.566 [2024-07-22 23:24:57.731033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.566 qpair failed and we were unable to recover it. 00:44:21.566 [2024-07-22 23:24:57.731286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.566 [2024-07-22 23:24:57.731369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.566 qpair failed and we were unable to recover it. 00:44:21.566 [2024-07-22 23:24:57.731522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.566 [2024-07-22 23:24:57.731558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.566 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.731885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.731921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.732199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.732264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.732539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.732602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.732900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.732937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 
00:44:21.567 [2024-07-22 23:24:57.733234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.733298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.733551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.733607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.733917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.733953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.734267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.734364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.734560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.734623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.734925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.734982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.735233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.735297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.735559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.735639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.735931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.735967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.736174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.736238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 
00:44:21.567 [2024-07-22 23:24:57.736510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.736547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.736826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.736862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.737129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.737194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.737428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.737465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.737677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.737713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.738002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.738067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.738362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.738400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.738613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.738650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.738908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.738972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.739284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.739370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 
00:44:21.567 [2024-07-22 23:24:57.739598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.739655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.739953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.740019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.740324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.740384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.740566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.740602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.740789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.740854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.741172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.741237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.741554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.741590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.741835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.741900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.742129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.742194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.742495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.742537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 
00:44:21.567 [2024-07-22 23:24:57.742777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.742842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.743153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.743218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.567 qpair failed and we were unable to recover it. 00:44:21.567 [2024-07-22 23:24:57.743457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.567 [2024-07-22 23:24:57.743493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.743746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.743811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.744082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.744147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.744415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.744451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.744729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.744794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.745070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.745135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.745386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.745423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.745639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.745704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 
00:44:21.568 [2024-07-22 23:24:57.745998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.746062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.746378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.746415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.746733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.746798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.747102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.747166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.747402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.747439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.749204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.749276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.749570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.749637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.749886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.749923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.750108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.750169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.750414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.750451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 
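The run above is the host side of the disconnect test retrying its NVMe/TCP connection: each attempt ends with connect() returning errno 111 (ECONNREFUSED) in posix_sock_create(), so nvme_tcp_qpair_connect_sock() gives up on the qpair and the host immediately queues another attempt against 10.0.0.2:4420. As a rough illustration only (this probe is not part of the SPDK test suite), the same refusal can be observed from a shell while the target is down, using the address and port taken from the log:

#!/usr/bin/env bash
# Illustrative probe; ADDR/PORT are copied from the log above.
ADDR=10.0.0.2
PORT=4420
for attempt in 1 2 3 4 5; do
    # bash's /dev/tcp pseudo-file performs a TCP connect(); with no listener
    # it fails the same way the log shows (ECONNREFUSED, errno 111).
    if (exec 3<>"/dev/tcp/${ADDR}/${PORT}") 2>/dev/null; then
        echo "attempt ${attempt}: connected"
        break
    fi
    echo "attempt ${attempt}: connect() refused, retrying"
    sleep 0.2
done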
00:44:21.568 [2024-07-22 23:24:57.750587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 [2024-07-22 23:24:57.750623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 [2024-07-22 23:24:57.750774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1070433 Killed "${NVMF_APP[@]}" "$@"
00:44:21.568 [2024-07-22 23:24:57.750847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 [2024-07-22 23:24:57.751116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 [2024-07-22 23:24:57.751178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 [2024-07-22 23:24:57.751442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 [2024-07-22 23:24:57.751479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 [2024-07-22 23:24:57.751682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 [2024-07-22 23:24:57.751748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:44:21.568 [2024-07-22 23:24:57.752040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 [2024-07-22 23:24:57.752106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:44:21.568 [2024-07-22 23:24:57.752385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 [2024-07-22 23:24:57.752421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
00:44:21.568 [2024-07-22 23:24:57.752634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.568 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:44:21.568 [2024-07-22 23:24:57.752699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.568 qpair failed and we were unable to recover it.
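Buried in that stretch is the event that explains the refusals: bash reports the previous target process (PID 1070433) as Killed from target_disconnect.sh line 36, i.e. the nvmf_tgt application has been taken down, which is precisely the disconnect this test case exercises, and the trace then shows disconnect_init 10.0.0.2 / nvmfappstart -m 0xF0 starting a replacement. A toy reproduction (not the SPDK helper itself) of where that "Killed" job notice comes from:

#!/usr/bin/env bash
# Toy sketch only: a monitored background job stands in for nvmf_tgt.
sleep 300 &
app_pid=$!
kill -9 "${app_pid}"   # force the stand-in target down; clients would now see ECONNREFUSED
wait "${app_pid}"      # reap it; bash reports the signal death with a "... Killed ..." notice like the one in the log
echo "stand-in target ${app_pid} exited with status $?"   # 137 = 128 + SIGKILL(9)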
00:44:21.568 [2024-07-22 23:24:57.753024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.753091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:21.568 [2024-07-22 23:24:57.753373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.753439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:21.568 [2024-07-22 23:24:57.753734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.753799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.754015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.754049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.754236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.754297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.754542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.754607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.754879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.568 [2024-07-22 23:24:57.754914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.568 qpair failed and we were unable to recover it. 00:44:21.568 [2024-07-22 23:24:57.755100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.755165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.755426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.755492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 
00:44:21.569 [2024-07-22 23:24:57.755812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.755847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.756146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.756211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.756510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.756576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.756797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.756834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.757099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.757164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.757418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.757483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.757771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.757808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.757991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.758056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.758303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.758398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.758653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.758690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 
00:44:21.569 [2024-07-22 23:24:57.758915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.758980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.759325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.759385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.759538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.759573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.759787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.759834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.760048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.760094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.760303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.760348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1070906 00:44:21.569 [2024-07-22 23:24:57.760507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:44:21.569 [2024-07-22 23:24:57.760562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.760794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.760841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 
00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1070906 00:44:21.569 [2024-07-22 23:24:57.761048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.761082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.761300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.761367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.761531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.761566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1070906 ']' 00:44:21.569 [2024-07-22 23:24:57.761820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.761854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.762097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:21.569 [2024-07-22 23:24:57.762162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.762394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.762442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:21.569 [2024-07-22 23:24:57.762652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.762688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:21.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:21.569 [2024-07-22 23:24:57.762855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.762902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:21.569 [2024-07-22 23:24:57.763109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.763157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.763391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.763428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 23:24:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:21.569 [2024-07-22 23:24:57.763559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.763603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.763800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.763846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.569 qpair failed and we were unable to recover it. 00:44:21.569 [2024-07-22 23:24:57.764119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.569 [2024-07-22 23:24:57.764181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.764449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.764484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.764620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.764655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.764913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.764947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 
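Interleaved with the continuing connection errors, the trace shows the restart half of the sequence: nvmfpid=1070906 is recorded, nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with -i 0 -e 0xFFFF -m 0xF0, and waitforlisten then blocks until the new process is listening on the UNIX domain socket /var/tmp/spdk.sock (max_retries=100). A simplified start-and-wait sketch, with the binary path, namespace, options and socket path copied from the log but the polling loop written for illustration rather than lifted from nvmf/common.sh:

#!/usr/bin/env bash
# Simplified sketch of the start-and-wait pattern the trace implies.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
ip netns exec cvl_0_0_ns_spdk "${NVMF_TGT}" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!    # pid of the launched command (via the netns wrapper)

# waitforlisten-style poll: the target creates its RPC socket once it is up,
# so retry until /var/tmp/spdk.sock exists or ~10 seconds have passed.
for _ in $(seq 1 100); do
    if [ -S /var/tmp/spdk.sock ]; then
        echo "nvmf_tgt (pid ${nvmfpid}) is up"
        break
    fi
    sleep 0.1
done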
00:44:21.570 [2024-07-22 23:24:57.765149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.765223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.765476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.765527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.765757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.765805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.766073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.766138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.766412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.766474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.766759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.766797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.767056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.767094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.767329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.767408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.767642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.767680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 00:44:21.570 [2024-07-22 23:24:57.767853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.570 [2024-07-22 23:24:57.767888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.570 qpair failed and we were unable to recover it. 
00:44:21.570 [2024-07-22 23:24:57.768051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.570 [2024-07-22 23:24:57.768096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.570 qpair failed and we were unable to recover it.
00:44:21.570 [... the same three-line error repeats for every subsequent reconnect attempt through 2024-07-22 23:24:57.837404: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered ...]
00:44:21.576 [2024-07-22 23:24:57.837625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.837698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.838024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.838095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.838351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.838417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.838762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.838829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.839095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.839131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.839344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.839410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.839694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.839761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.840057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.840092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.840382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.840451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.840766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.840830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 
00:44:21.576 [2024-07-22 23:24:57.841141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.841179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.841410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.841448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.841646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.841690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.841928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.841963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.842129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.842165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.842415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.842478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.842671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.576 [2024-07-22 23:24:57.842732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.576 qpair failed and we were unable to recover it. 00:44:21.576 [2024-07-22 23:24:57.842988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.577 [2024-07-22 23:24:57.843053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.577 qpair failed and we were unable to recover it. 00:44:21.577 [2024-07-22 23:24:57.843353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.577 [2024-07-22 23:24:57.843429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.577 qpair failed and we were unable to recover it. 00:44:21.577 [2024-07-22 23:24:57.845273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.577 [2024-07-22 23:24:57.845391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.577 qpair failed and we were unable to recover it. 
00:44:21.577 [2024-07-22 23:24:57.845697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.845762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.846078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.846144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.846388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.846425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.846572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.846637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.846948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.847015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.847269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.847324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.847535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.577 [2024-07-22 23:24:57.847601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.577 qpair failed and we were unable to recover it.
00:44:21.577 [2024-07-22 23:24:57.847654] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization...
00:44:21.852 [2024-07-22 23:24:57.847800] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:44:21.852 [2024-07-22 23:24:57.847898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.852 [2024-07-22 23:24:57.847960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.852 qpair failed and we were unable to recover it.
00:44:21.852 [2024-07-22 23:24:57.848268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.852 [2024-07-22 23:24:57.848303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.852 qpair failed and we were unable to recover it.
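Note on the repeated failure above: errno = 111 on Linux is ECONNREFUSED, so each connect() issued by posix_sock_create was refused, which typically means nothing was accepting TCP connections on 10.0.0.2:4420 at that moment (the SPDK nvmf target whose DPDK EAL initialization appears just above was only starting up). The following minimal C sketch is not part of the test output or the SPDK suite; it only repeats the same plain connect() probe so the presence of a listener on that address can be checked by hand. The address and port are copied from the log entries and are otherwise an assumption about the setup.

/* Minimal standalone sketch (not part of the SPDK test output above):
 * repeat the connect() probe the log shows failing, to check whether a
 * listener is accepting on the target address. errno = 111 is ECONNREFUSED
 * on Linux. Address and port are taken from the log; adjust as needed. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* ECONNREFUSED (111) here matches the repeated failure in the log:
         * nothing was accepting TCP connections on 10.0.0.2:4420. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connect() succeeded; a listener is accepting on 10.0.0.2:4420\n");
    close(fd);
    return 0;
}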
00:44:21.852 [2024-07-22 23:24:57.848531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.848597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.848841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.848905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.849169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.849211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.849402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.849438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.849603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.849650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.849877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.849912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.850060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.850096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.850266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.850301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.850462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.850499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 00:44:21.852 [2024-07-22 23:24:57.850701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.852 [2024-07-22 23:24:57.850742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.852 qpair failed and we were unable to recover it. 
00:44:21.852 [2024-07-22 23:24:57.850998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.851033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.851265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.851384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.851514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.851549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.851705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.851741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.851945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.851979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.852186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.852223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.852475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.852541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.852802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.852838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.853072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.853136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.853415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.853483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 
00:44:21.853 [2024-07-22 23:24:57.853754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.853796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.854043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.854109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.854306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.854391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.854641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.854677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.854911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.854974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.855230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.855294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.855575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.855610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.855805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.855869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.856176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.856239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.856519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.856555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 
00:44:21.853 [2024-07-22 23:24:57.856768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.856833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.857110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.857175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.857440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.857476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.857679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.857744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.858008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.858072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.858354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.858391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.858634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.858697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.858992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.859057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.859360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.859396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.859657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.859724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 
00:44:21.853 [2024-07-22 23:24:57.860037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.860101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.860357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.860393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.860559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.860622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.860900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.860965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.861179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.861220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.861463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.861530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.861800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.861864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.862163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.853 [2024-07-22 23:24:57.862197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.853 qpair failed and we were unable to recover it. 00:44:21.853 [2024-07-22 23:24:57.862417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.862453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.862680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.862745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 
00:44:21.854 [2024-07-22 23:24:57.863016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.863050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.863267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.863351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.863603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.863669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.863975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.864009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.864334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.864399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.864690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.864755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.865079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.865114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.865360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.865426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.865704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.865768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.866022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.866058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 
00:44:21.854 [2024-07-22 23:24:57.866288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.866371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.866622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.866687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.866993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.867057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.867367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.867434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.867693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.867758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.868062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.868097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.868414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.868482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.868762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.868827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.869096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.869131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.869358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.869424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 
00:44:21.854 [2024-07-22 23:24:57.869727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.869792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.870076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.870110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.870393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.870459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.870747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.870812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.871118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.871152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.871419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.871484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.871725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.871790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.872052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.872087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.872364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.872401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.872620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.872684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 
00:44:21.854 [2024-07-22 23:24:57.872959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.873023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.873290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.873387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.873618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.873681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.873962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.854 [2024-07-22 23:24:57.874025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.854 qpair failed and we were unable to recover it. 00:44:21.854 [2024-07-22 23:24:57.874289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.874386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.874670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.874704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.874904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.874969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.875234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.875301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.875575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.875610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.875824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.875887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 
00:44:21.855 [2024-07-22 23:24:57.876139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.876202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.876456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.876492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.876738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.876814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.877062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.877109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.877338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.877374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.877532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.877567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.877849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.877896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.878081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.878115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.878376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.878426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.878661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.878724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 
00:44:21.855 [2024-07-22 23:24:57.879041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.879076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.879383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.879433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.879636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.879699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.880003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.880037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.880266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.880326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.880513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.880561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.880719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.880754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.880953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.881000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.881181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.881245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.881526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.881562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 
00:44:21.855 [2024-07-22 23:24:57.881782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.881846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.882171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.882235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.882527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.882563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.882840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.882904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.883200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.883265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.883537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.883573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.883845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.883911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.884232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.884296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.884553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.884588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.884761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.884823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 
00:44:21.855 [2024-07-22 23:24:57.885136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.885201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.855 qpair failed and we were unable to recover it. 00:44:21.855 [2024-07-22 23:24:57.885515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.855 [2024-07-22 23:24:57.885549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.885785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.885849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.886086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.886150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.886419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.886460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.886712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.886776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.887052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.887115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.887392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.887427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.887625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.887689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.888005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.888069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 
00:44:21.856 [2024-07-22 23:24:57.888375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.888412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.888631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.888722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.889026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.889090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.889292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.889386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.889643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.889706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.890013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.890077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.890264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.890299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.890509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.890557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.890858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.890922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.891184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.891218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 
00:44:21.856 [2024-07-22 23:24:57.891419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.891467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.891717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.891781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.892076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.892140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.892388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.892438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.892651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.892686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.892866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.892929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.894475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.894530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.894781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.894817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.895016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.895080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.895346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.895414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 
00:44:21.856 [2024-07-22 23:24:57.895620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.895655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.895800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.895863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.896155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.896219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.896454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.896489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.896633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.896696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.896915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.896979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.897160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.897194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.897363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.897411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.856 [2024-07-22 23:24:57.897609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.856 [2024-07-22 23:24:57.897672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.856 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.899246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.899379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 
00:44:21.857 [2024-07-22 23:24:57.899608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.899673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.901273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.901386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.901617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.901653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.901863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.901930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.902170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.902252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.902461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.902496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.902687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.902751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.902956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.903019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.903335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.903398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.903519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.903555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 
00:44:21.857 [2024-07-22 23:24:57.903766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.903830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.904140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.904175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.904464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.904514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.904827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.904891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.905176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.905211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.905404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.905453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.905660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.905725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.906043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.906078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.906387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.906435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.906649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.906712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 
00:44:21.857 [2024-07-22 23:24:57.906992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.907027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.907224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.907287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.907533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.907607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.907862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.907897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.908125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.908190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.908455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.908508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.908770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.908805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.908990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.857 [2024-07-22 23:24:57.909053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.857 qpair failed and we were unable to recover it. 00:44:21.857 [2024-07-22 23:24:57.909388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.909438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.909676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.909711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 
00:44:21.858 [2024-07-22 23:24:57.909916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.909980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.910297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.910392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.910674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.910709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.910936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.911000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.911260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.911340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.911533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.911568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.911785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.911849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.912071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.912135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.912370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.912405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.912543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.912619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 
00:44:21.858 [2024-07-22 23:24:57.912855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.912919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.913227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.913291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.913483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.913518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.913759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.913823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.914093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.914133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.914394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.914430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.914611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.914674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.914980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.915015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.915401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.915452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.915620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.915686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 
00:44:21.858 [2024-07-22 23:24:57.915914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.915949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.916189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.916253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.916527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.916596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.916900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.916934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.917230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.917295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.917483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.917530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.917820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.917855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.918075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.918138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.918395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.918444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.918631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.918666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 
00:44:21.858 [2024-07-22 23:24:57.918911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.918974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.919279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.919371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.919522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.919556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.919737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.919800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.920024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.920087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.858 [2024-07-22 23:24:57.920354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.858 [2024-07-22 23:24:57.920389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.858 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.920597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.920660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.920926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.920989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.921253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.921288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.921455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.921503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 
00:44:21.859 [2024-07-22 23:24:57.921735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.921798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.922126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.922178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.922432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.922469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 EAL: No free 2048 kB hugepages reported on node 1 00:44:21.859 [2024-07-22 23:24:57.922618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.922691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.922989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.923052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.923391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.923427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.923607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.923672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.923925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.923990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.924276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.924379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.924503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.924538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 
00:44:21.859 [2024-07-22 23:24:57.924817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.924881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.925136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.925173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.925402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.925467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.925753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.925818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.926084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.926124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.926365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.926430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.926680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.926743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.926985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.927020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.927206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.927269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.927507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.927572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 
00:44:21.859 [2024-07-22 23:24:57.929280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.929382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.929579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.929649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.929937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.930002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.930270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.930304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.930523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.930587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.930899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.930964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.931279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.931360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.931545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.931610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.931933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.931982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.932273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.932307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 
00:44:21.859 [2024-07-22 23:24:57.932550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.932597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.932832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.932880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.933082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.933116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.859 qpair failed and we were unable to recover it. 00:44:21.859 [2024-07-22 23:24:57.933295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.859 [2024-07-22 23:24:57.933391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.933549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.933615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.933881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.933915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.934101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.934164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.934412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.934462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.934670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.934704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.934965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.935028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 
00:44:21.860 [2024-07-22 23:24:57.935335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.935416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.935617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.935652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.935840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.935903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.936120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.936184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.936440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.936475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.936709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.936773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.937052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.937118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.937393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.937428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.937647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.937710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.937945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.938009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 
00:44:21.860 [2024-07-22 23:24:57.938320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.938356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.938577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.938641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.938829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.938894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.939070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.939104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.939381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.939437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.939704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.939770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.940051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.940086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.940383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.940432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.940620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.940683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.940995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.941037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 
00:44:21.860 [2024-07-22 23:24:57.941302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.941386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.941507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.941542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.943085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.943158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.943419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.943469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.945186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.945259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.945519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.945554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.945774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.945839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.946139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.946202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.860 qpair failed and we were unable to recover it. 00:44:21.860 [2024-07-22 23:24:57.946433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.860 [2024-07-22 23:24:57.946469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.946679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.946743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 
00:44:21.861 [2024-07-22 23:24:57.947006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.947069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.947338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.947375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.947516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.947586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.947898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.947960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.948262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.948297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.948526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.948586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.948879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.948943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.949199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.949233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.949425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.949474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.949669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.949732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 
00:44:21.861 [2024-07-22 23:24:57.950001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.950041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.950223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.950286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.950492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.950540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.950822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.950857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.951103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.951174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.951422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.951470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.951741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.951775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.952003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.952066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.952339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.952410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.952559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.952633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 
00:44:21.861 [2024-07-22 23:24:57.952863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.952897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.953039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.953102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.953284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.953375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.953528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.953595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.953899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.953938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.954178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.954241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.954457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.954505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.954733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.954795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.955067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.955101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.955381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.955430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 
00:44:21.861 [2024-07-22 23:24:57.955676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.955738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.956038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.956101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.956367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.956402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.956550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.956628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.861 qpair failed and we were unable to recover it. 00:44:21.861 [2024-07-22 23:24:57.956890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.861 [2024-07-22 23:24:57.956952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.957160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.957223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.957424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.957460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.957636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.957699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.957984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.958048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.958295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.958378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 
00:44:21.862 [2024-07-22 23:24:57.958648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.958683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.958894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.958958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.959257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.959338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.959524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.959589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.959899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.959934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.960126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.960189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.960432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.960497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.960793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.960856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.961118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.961152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.961376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.961440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 
00:44:21.862 [2024-07-22 23:24:57.961744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.961807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.962072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.962135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.962390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.962424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.962585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.962647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.962933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.962995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.963205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.963268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.963452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.963486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.963745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.963808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.964069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.964131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.964401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.964469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 
00:44:21.862 [2024-07-22 23:24:57.964722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.964757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.964955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.965018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.965325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.965390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.965645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.965709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.966008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.966048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.966346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.966412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.862 qpair failed and we were unable to recover it. 00:44:21.862 [2024-07-22 23:24:57.966640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.862 [2024-07-22 23:24:57.966704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.966956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.967018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.967277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.967320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.967582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.967645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 
00:44:21.863 [2024-07-22 23:24:57.967940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.968003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.968247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.968327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.968632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.968667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.968932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.968995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.969251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.969331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.969578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.969641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.969878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.969912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.970170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.970233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.970490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.970554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.970831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.970894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 
00:44:21.863 [2024-07-22 23:24:57.971206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.971240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.971526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.971561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.971814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.971877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.972185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.972247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.972508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.972544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.972823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.972888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.973164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.973228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.973545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.973610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.973920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.973955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.974266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.974365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 
00:44:21.863 [2024-07-22 23:24:57.974641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.974704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.975019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.975083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.975391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.975427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.975681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.975745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.975930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.975994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.976261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.976336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.976544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.976579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.976789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.976853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.977157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.977220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.977468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.977532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 
00:44:21.863 [2024-07-22 23:24:57.977766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.977801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.977977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.978041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.978351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.978416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.978668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.863 [2024-07-22 23:24:57.978732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.863 qpair failed and we were unable to recover it. 00:44:21.863 [2024-07-22 23:24:57.978994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.979034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.979243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.979306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.979629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.979985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.980048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.980368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.980403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.980622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.980686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 
00:44:21.864 [2024-07-22 23:24:57.980911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.980974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.981274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.981356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.981620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.981655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.981855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.981920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.982231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.982294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.982569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.982632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.982894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.982929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.983168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.983232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.983552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.983615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.983864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.983927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 
00:44:21.864 [2024-07-22 23:24:57.984227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.984262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.984576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.984611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.984881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.984944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.985237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.985301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.985633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.985668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.985968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.986032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.986303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.986385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.986651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.986715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.987010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.987046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.987353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.987417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 
00:44:21.864 [2024-07-22 23:24:57.987679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.987743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.988019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.988082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.988350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.988386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.988614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.988678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.988945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.989009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.989323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.989388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.989687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.989722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.990019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.990082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 00:44:21.864 [2024-07-22 23:24:57.990385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.864 [2024-07-22 23:24:57.990449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.864 qpair failed and we were unable to recover it. 
00:44:21.864 [2024-07-22 23:24:57.990678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:21.864 [2024-07-22 23:24:57.990724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:44:21.864 [2024-07-22 23:24:57.990742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420
00:44:21.864 qpair failed and we were unable to recover it.
00:44:21.864-00:44:21.870 [2024-07-22 23:24:57.991 .. 23:24:58.060] (the same "connect() failed, errno = 111", "sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." messages repeat continuously for every reconnect attempt in this interval)
00:44:21.870 [2024-07-22 23:24:58.060645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.870 [2024-07-22 23:24:58.060683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.870 qpair failed and we were unable to recover it. 00:44:21.870 [2024-07-22 23:24:58.060956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.870 [2024-07-22 23:24:58.061020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.870 qpair failed and we were unable to recover it. 00:44:21.870 [2024-07-22 23:24:58.061289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.870 [2024-07-22 23:24:58.061378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.870 qpair failed and we were unable to recover it. 00:44:21.870 [2024-07-22 23:24:58.061660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.870 [2024-07-22 23:24:58.061724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.870 qpair failed and we were unable to recover it. 00:44:21.870 [2024-07-22 23:24:58.062032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.870 [2024-07-22 23:24:58.062069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.870 qpair failed and we were unable to recover it. 00:44:21.870 [2024-07-22 23:24:58.062384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.870 [2024-07-22 23:24:58.062451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.870 qpair failed and we were unable to recover it. 00:44:21.870 [2024-07-22 23:24:58.062783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.062850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.063110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.063187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.063482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.063520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.063776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.063855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 
00:44:21.871 [2024-07-22 23:24:58.064168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.064246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.064561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.064629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.064933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.064991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.065234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.065299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.065626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.065700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.066024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.066088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.066361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.066399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.066706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.066776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.067069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.067135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.067398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.067465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 
00:44:21.871 [2024-07-22 23:24:58.067728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.067765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.067966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.068042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.068394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.068462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.068740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.068801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.069129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.069186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.069495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.069565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.069828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.069892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.070190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.070257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.070566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.070611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.070949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.071016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 
00:44:21.871 [2024-07-22 23:24:58.071271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.071362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.071647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.071713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.072016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.072060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.072377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.072445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.072751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.072816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.073115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.073179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.871 [2024-07-22 23:24:58.073450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.871 [2024-07-22 23:24:58.073487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.871 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.073735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.073799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.074068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.074135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.074434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.074501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 
00:44:21.872 [2024-07-22 23:24:58.074773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.074810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.075087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.075150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.075405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.075442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.075687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.075752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.076012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.076055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.076345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.076411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.076718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.076785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.077098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.077164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.077471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.077508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.077815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.077885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 
00:44:21.872 [2024-07-22 23:24:58.078221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.078286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.078618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.078691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.079019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.079055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.079284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.079386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.079701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.079780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.080086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.080151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.080402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.080439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.080685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.080750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.081018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.081085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.081386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.081453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 
00:44:21.872 [2024-07-22 23:24:58.081742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.081779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.082002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.082066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.082384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.082454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.082771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.082836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.083145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.083213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.083527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.083565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.083818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.083882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.084199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.084266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.084590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.084632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.084958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.085025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 
00:44:21.872 [2024-07-22 23:24:58.085351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.085424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.085722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.085802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.086116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.086165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.086509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.086555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.872 qpair failed and we were unable to recover it. 00:44:21.872 [2024-07-22 23:24:58.086858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.872 [2024-07-22 23:24:58.086942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.087276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.087381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.087714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.087807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.088110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.088175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.088487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.088555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.088852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.088917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 
00:44:21.873 [2024-07-22 23:24:58.089235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.089269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.089633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.089699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.089966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.090031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.090346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.090412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.090722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.090757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.091070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.091135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.091454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.091520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.091781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.091845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.092146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.092181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.092491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.092558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 
00:44:21.873 [2024-07-22 23:24:58.092880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.092945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.093238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.093300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.093642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.093677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.093944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.094008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.094303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.094387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.094645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.094710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.094984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.095018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.095197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.095232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.095474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.095540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.095836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.095899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 
00:44:21.873 [2024-07-22 23:24:58.096203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.096237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.096448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.096515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.096788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.096852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.097165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.097229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.097576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.097612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.097825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.097890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.098159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.098223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.098543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.098608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.098864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.098899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.099092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.099157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 
00:44:21.873 [2024-07-22 23:24:58.099409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.099475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.873 qpair failed and we were unable to recover it. 00:44:21.873 [2024-07-22 23:24:58.099729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.873 [2024-07-22 23:24:58.099793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.100091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.100125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.100387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.100454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.100767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.100833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.101098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.101161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.101407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.101449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.101649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.101714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.101986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.102049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.102293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.102372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 
00:44:21.874 [2024-07-22 23:24:58.102660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.102695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.103014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.103077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.103384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.103450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.103713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.103777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.104078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.104134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.104393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.104460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.104727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.104790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.105069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.105132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.105404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.105441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.105721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.105784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 
00:44:21.874 [2024-07-22 23:24:58.106046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.106109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.106415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.106481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.106767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.106802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.107105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.107170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.107468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.107534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.107832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.107895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.108091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.108126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.108331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.108401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.108630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.108694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.108940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.109004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 
00:44:21.874 [2024-07-22 23:24:58.109277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.109323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.109622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.109687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.109992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.110056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.110387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.110453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.110716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.110752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.111016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.111080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.111293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.874 [2024-07-22 23:24:58.111390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.874 qpair failed and we were unable to recover it. 00:44:21.874 [2024-07-22 23:24:58.111662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.111725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.112027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.112062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.112368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.112436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 
00:44:21.875 [2024-07-22 23:24:58.112697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.112761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.113036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.113100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.113363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.113399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.113658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.113722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.114020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.114084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.114392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.114457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.114761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.114801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.115104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.115169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.115447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.115513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.115812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.115875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 
00:44:21.875 [2024-07-22 23:24:58.116140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.116174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.116358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.116423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.116699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.116763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.117037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.117101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.117370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.117404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.117668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.117732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.118065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.118129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.118380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.118444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.118720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.118756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.120664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.120737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 
00:44:21.875 [2024-07-22 23:24:58.120994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.121061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.121341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.121408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.121675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.121710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.121963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.122027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.122344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.122411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.122646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.122710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.123015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.123051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.123367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.123433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.123707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.123770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.125252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.125343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 
00:44:21.875 [2024-07-22 23:24:58.125629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.125665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.125889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.125953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.126254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.126340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.126666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.126730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.875 [2024-07-22 23:24:58.127010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.875 [2024-07-22 23:24:58.127045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.875 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.127284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.127371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.127657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.127721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.127976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.128040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.128275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.128328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.128529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.128594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 
00:44:21.876 [2024-07-22 23:24:58.128890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.128954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.129254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.129338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.129564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.129599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.129866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.129929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.130229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.130293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.130571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.130636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.130856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.130897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.131118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.131182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.131407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.131473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.131773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.131838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 
00:44:21.876 [2024-07-22 23:24:58.132114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.132149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.132421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.132486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.132784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.132848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.133110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.133174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.133408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.133444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.133618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.133682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.133894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.133959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.134204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.134268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.134589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.134625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.134852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.134916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 
00:44:21.876 [2024-07-22 23:24:58.135226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.135290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.135575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.135641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.135901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.135936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.136106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.136170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.876 qpair failed and we were unable to recover it. 00:44:21.876 [2024-07-22 23:24:58.136391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.876 [2024-07-22 23:24:58.136457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.136720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.136783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.137043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.137078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.137297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.137378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.137689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.137753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.138002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.138066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 
00:44:21.877 [2024-07-22 23:24:58.138371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.138407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.138665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.138729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.139001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.139064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.139339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.139405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.139728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.139783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.140046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.140111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.140373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.140440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.140757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.140821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.141087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.141122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.141391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.141456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 
00:44:21.877 [2024-07-22 23:24:58.141767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.141831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.142088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.142152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.142428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.142464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.142697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.142761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.142985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.143048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.143302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.143381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.143651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.143692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.143917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.143981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.144278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.144378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.144693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.144758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 
00:44:21.877 [2024-07-22 23:24:58.144961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.144996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.145249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.145332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.145639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.145703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.145970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.146034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.146328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.146381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.146661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.146725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.146978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.147042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.147267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.147348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.147628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.147663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.147867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.147930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 
00:44:21.877 [2024-07-22 23:24:58.148271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.148402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.148735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.877 [2024-07-22 23:24:58.148826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.877 qpair failed and we were unable to recover it. 00:44:21.877 [2024-07-22 23:24:58.149134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.878 [2024-07-22 23:24:58.149172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:21.878 qpair failed and we were unable to recover it. 00:44:21.878 [2024-07-22 23:24:58.149414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.149482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.149712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.149778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.150026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.150091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.150381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.150417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.150633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.150668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.150878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.150913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.151163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.151198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 
00:44:22.161 [2024-07-22 23:24:58.151358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.151393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.151535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.151570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.151772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.151807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.152014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.152049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.152233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.152268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8740000b90 with addr=10.0.0.2, port=4420 00:44:22.161 qpair failed and we were unable to recover it. 00:44:22.161 [2024-07-22 23:24:58.152422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:22.161 [2024-07-22 23:24:58.152467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:22.161 [2024-07-22 23:24:58.152472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.161 [2024-07-22 23:24:58.152487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:22.161 [2024-07-22 23:24:58.152511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:22.162 [2024-07-22 23:24:58.152527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:22.162 [2024-07-22 23:24:58.152527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.152745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.152780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.152939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.152902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:44:22.162 [2024-07-22 23:24:58.152978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 
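The app_setup_trace notices embedded in the line above describe how to grab a tracepoint snapshot while the nvmf target is still running. A minimal sketch following those notices from another shell on the target host; the spdk_trace binary location and the copy destination are assumptions, while the arguments and the /dev/shm path are quoted from the notices themselves:

# Snapshot the nvmf trace group of app instance 0 while it runs, then keep
# the shared-memory trace file for offline analysis, as the notices suggest.
spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0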
00:44:22.162 [2024-07-22 23:24:58.152971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:44:22.162 [2024-07-22 23:24:58.153033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:44:22.162 [2024-07-22 23:24:58.153036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:44:22.162 [2024-07-22 23:24:58.153184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.153218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.153451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.153488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.153687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.153722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.153982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.154046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.154348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.154384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.154547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.154589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.154843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.154906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.155182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.155246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.155517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.155552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 
00:44:22.162 [2024-07-22 23:24:58.155804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.155867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.156140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.156204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.156471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.156507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.156722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.156756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.156975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.157038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.157342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.157405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.157564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.157628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.157933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.157967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.158227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.158289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.158566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.158617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 
00:44:22.162 [2024-07-22 23:24:58.158944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.159008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.159263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.159297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.159470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.159504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.159752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.159814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.160124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.160187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.160435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.160471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.160687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.160749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.161059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.161122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.161395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.161431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.161640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.161675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 
00:44:22.162 [2024-07-22 23:24:58.161900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.161962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.162263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.162337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.162575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.162638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.162942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.162 [2024-07-22 23:24:58.162982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.162 qpair failed and we were unable to recover it. 00:44:22.162 [2024-07-22 23:24:58.163290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.163366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.163612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.163677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.163980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.164044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.164340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.164375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.164621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.164683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.164944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.165007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 
00:44:22.163 [2024-07-22 23:24:58.165305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.165394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.165618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.165653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.165917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.165980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.166270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.166348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.166611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.166676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.166936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.166970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.167184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.167246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.167574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.167609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.167900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.167963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.168261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.168295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 
00:44:22.163 [2024-07-22 23:24:58.168605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.168684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.168944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.169007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.169266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.169347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.169574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.169609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.169889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.169952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.170256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.170334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.170586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.170665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.170925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.170959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.171196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.171258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.171533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.171569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 
00:44:22.163 [2024-07-22 23:24:58.171801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.171864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.172137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.172172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.172399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.172434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.172656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.172718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.172988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.173051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.173274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.173316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.173530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.173594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.173848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.173911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.174168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.174232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.174511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.174545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 
00:44:22.163 [2024-07-22 23:24:58.174760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.174822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.175124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.163 [2024-07-22 23:24:58.175186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.163 qpair failed and we were unable to recover it. 00:44:22.163 [2024-07-22 23:24:58.175496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.175531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.175734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.175769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.176005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.176077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.176375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.176427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.176715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.176779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.177044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.177079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.177344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.177407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.177572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.177642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 
00:44:22.164 [2024-07-22 23:24:58.177955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.178019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.178276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.178317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.178523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.178578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.178886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.178948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.179214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.179278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.179584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.179619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.179928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.179991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.180250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.180330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.180573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.180607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.180921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.180956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 
00:44:22.164 [2024-07-22 23:24:58.181157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.181219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.181482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.181517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.181692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.181755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.182015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.182050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.182280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.182359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.182636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.182699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.182960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.183023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.183289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.183331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.183540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.183597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.183861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.183923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 
00:44:22.164 [2024-07-22 23:24:58.184187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.184249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.184579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.184634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.184901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.184965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.185208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.185271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.185554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.185590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.185846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.185881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.186098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.186160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.186450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.186486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.186724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.186789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.164 [2024-07-22 23:24:58.187019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.187053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 
00:44:22.164 [2024-07-22 23:24:58.187233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.164 [2024-07-22 23:24:58.187295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.164 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.187504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.187538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.187793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.187857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.188169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.188204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.188477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.188512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.188719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.188782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.189051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.189116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.189413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.189449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.189644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.189707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.189970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.190033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 
00:44:22.165 [2024-07-22 23:24:58.190330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.190391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.190551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.190586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.190771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.190833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.191127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.191189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.191418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.191453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.191648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.191684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.191858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.191892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.192039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.192074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.192345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.192381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.192594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.192629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 
00:44:22.165 [2024-07-22 23:24:58.192864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.192926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.193244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.193307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.193592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.193662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.193938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.193972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.194151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.194213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.194432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.194467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.194734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.194798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.195042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.195076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.195305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.195393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.195555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.195589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 
00:44:22.165 [2024-07-22 23:24:58.195859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.195923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.196230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.196264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.196563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.196626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.196938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.197000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.197258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.197338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.197571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.197605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.197845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.197909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.198209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.198271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.165 [2024-07-22 23:24:58.198535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.165 [2024-07-22 23:24:58.198570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.165 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.198754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.198788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 
00:44:22.166 [2024-07-22 23:24:58.198959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.199021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.199296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.199393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.199639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.199703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.199967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.200001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.200271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.200349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.200641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.200704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.201026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.201091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.201404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.201440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.201657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.201719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.201989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.202051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 
00:44:22.166 [2024-07-22 23:24:58.202364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.202418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.202668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.202739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.202956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.203019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.203272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.203346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.203645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.203709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.203978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.204012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.204278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.204389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.204642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.204705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.204963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.205026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.205274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.205318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 
00:44:22.166 [2024-07-22 23:24:58.205580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.205643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.205936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.205998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.206218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.206281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.206582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.206616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.206938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.207000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.207296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.207387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.207582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.207658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.207971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.208005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.208332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.166 [2024-07-22 23:24:58.208408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.166 qpair failed and we were unable to recover it. 00:44:22.166 [2024-07-22 23:24:58.208626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.208688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 
00:44:22.167 [2024-07-22 23:24:58.208964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.209028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.209280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.209322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.209510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.209544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.209818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.209890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.210120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.210182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.210436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.210471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.210669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.210732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.210945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.211007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.211266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.211358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.211537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.211572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 
00:44:22.167 [2024-07-22 23:24:58.211787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.211848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.212141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.212203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.212535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.212570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.212811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.212845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.213117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.213179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.213406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.213668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.213732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.214006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.214041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.214252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.214332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.214570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.214628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 
00:44:22.167 [2024-07-22 23:24:58.214932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.214995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.215297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.215342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.215539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.215574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.215835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.215897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.216161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.216225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.216547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.216582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.216898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.216961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.217183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.217245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.217563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.217598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.217862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.217896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 
00:44:22.167 [2024-07-22 23:24:58.218092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.218164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.218465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.218500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.218725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.218789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.219054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.219089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.219303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.219387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.219596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.219631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.219852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.167 [2024-07-22 23:24:58.219915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.167 qpair failed and we were unable to recover it. 00:44:22.167 [2024-07-22 23:24:58.220193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.220227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.220448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.220483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.220691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.220753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 
00:44:22.168 [2024-07-22 23:24:58.221015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.221078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.221348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.221384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.221637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.221700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.221989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.222050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.222384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.222420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.222642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.222676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.222983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.223045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.223305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.223384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.223593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.223667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.223923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.223957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 
00:44:22.168 [2024-07-22 23:24:58.224213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.224275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.224559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.224611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.224837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.224900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.225133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.225167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.225343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.225409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.225675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.225736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.226034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.226097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.226360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.226395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.226617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.226680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.226984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.227046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 
00:44:22.168 [2024-07-22 23:24:58.227303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.227390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.227631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.227682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.227943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.228005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.228264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.228339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.228635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.228699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.228962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.228995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.229217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.229280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.229624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.229688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.229925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.229989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.230306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.230349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 
00:44:22.168 [2024-07-22 23:24:58.230576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.230652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.230959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.231033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.231270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.231351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.168 [2024-07-22 23:24:58.231584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.168 [2024-07-22 23:24:58.231618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.168 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.231857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.231919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.232167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.232229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.232538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.232573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.232718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.232752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.232949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.233011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.233272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.233365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 
00:44:22.169 [2024-07-22 23:24:58.233616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.233679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.233942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.233976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.234165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.234227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.234532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.234567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.234869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.234932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.235242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.235276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.235590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.235648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.235910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.235973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.236249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.236325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.236567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.236601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 
00:44:22.169 [2024-07-22 23:24:58.236812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.236875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.237178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.237242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.237568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.237603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.237898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.237932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.238169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.238216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.238440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.238475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.238731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.238778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.239017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.239050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.239266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.239329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.239612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.239659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 
00:44:22.169 [2024-07-22 23:24:58.239928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.239975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.240250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.240285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.240561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.240612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.240879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.240925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.241198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.241246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.241523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.241557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.241832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.241879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.242064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.242112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.242385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.242420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.242613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.242647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 
00:44:22.169 [2024-07-22 23:24:58.242866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.242912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.169 [2024-07-22 23:24:58.243131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.169 [2024-07-22 23:24:58.243177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.169 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.243390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.243425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.243602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.243636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.243886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.243948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.244239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.244301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.244619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.244682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.244931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.244965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.245167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.245229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.245515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.245554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 
00:44:22.170 [2024-07-22 23:24:58.245785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.245848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.246140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.246175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.246430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.246466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.246670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.246732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.246997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.247060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.247325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.247360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.247596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.247659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.247906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.247968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.248271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.248384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.248554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.248588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 
00:44:22.170 [2024-07-22 23:24:58.248769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.248830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.249041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.249103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.249406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.249441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.249651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.249685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.249971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.250034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.250341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.250405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.250628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.250691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.251003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.251038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.251327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.251398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.251625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.251697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 
00:44:22.170 [2024-07-22 23:24:58.252008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.252071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.252372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.252408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.252613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.252675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.252937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.252999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.253302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.253385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.170 [2024-07-22 23:24:58.253548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.170 [2024-07-22 23:24:58.253582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.170 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.253834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.253898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.254166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.254228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.254549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.254584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.254882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.254917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 
00:44:22.171 [2024-07-22 23:24:58.255230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.255292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.255557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.255613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.255873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.255936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.256201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.256236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.256484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.256519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.256726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.256788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.257062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.257126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.257430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.257465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.257661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.257724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.258040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.258102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 
00:44:22.171 [2024-07-22 23:24:58.258407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.258443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.258633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.258668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.258854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.258917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.259156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.259219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.259523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.259559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.259748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.259781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.260003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.260076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.260386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.260422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.260671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.260708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.260982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.261016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 
00:44:22.171 [2024-07-22 23:24:58.261252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.261327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.261622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.261687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.261951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.262014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.262320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.262356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.262508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.262543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.262757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.262820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.263087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.263150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.263415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.263450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.263708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.263771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.264021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.264083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 
00:44:22.171 [2024-07-22 23:24:58.264394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.264430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.171 qpair failed and we were unable to recover it. 00:44:22.171 [2024-07-22 23:24:58.264670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.171 [2024-07-22 23:24:58.264705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.264871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.264904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.265074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.265107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.265279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.265336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.265541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.265575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.265819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.265882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.266190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.266252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.266556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.266592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 00:44:22.172 [2024-07-22 23:24:58.266810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.266845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it. 
00:44:22.172 [2024-07-22 23:24:58.267047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.172 [2024-07-22 23:24:58.267109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.172 qpair failed and we were unable to recover it.
00:44:22.172-00:44:22.177 [2024-07-22 23:24:58.267 .. 23:24:58.328] (the same two-line error pair repeats continuously through this window, with only the microsecond timestamps changing: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it.")
00:44:22.177 [2024-07-22 23:24:58.328417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.177 [2024-07-22 23:24:58.328452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.177 qpair failed and we were unable to recover it. 00:44:22.177 [2024-07-22 23:24:58.328711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.177 [2024-07-22 23:24:58.328772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.177 qpair failed and we were unable to recover it. 00:44:22.177 [2024-07-22 23:24:58.329060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.177 [2024-07-22 23:24:58.329094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.177 qpair failed and we were unable to recover it. 00:44:22.177 [2024-07-22 23:24:58.329334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.177 [2024-07-22 23:24:58.329396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.177 qpair failed and we were unable to recover it. 00:44:22.177 [2024-07-22 23:24:58.329589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.177 [2024-07-22 23:24:58.329647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.177 qpair failed and we were unable to recover it. 00:44:22.177 [2024-07-22 23:24:58.329942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.330004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.330267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.330300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.330502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.330536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.330744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.330790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.330977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.331040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 
00:44:22.178 [2024-07-22 23:24:58.331249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.331283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.331498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.331538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.331731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.331793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.332048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.332110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.332401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.332436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.332646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.332680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.332998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.333045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.333353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.333416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.333719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.333753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.334052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.334114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 
00:44:22.178 [2024-07-22 23:24:58.334408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.334442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.334600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.334662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.334967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.335001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.335303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.335386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.335596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.335643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.335916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.335979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.336236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.336269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.336421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.336456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.336660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.336694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.336917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.336979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 
00:44:22.178 [2024-07-22 23:24:58.337261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.337337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.337608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.337642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.337914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.337960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.338264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.338344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.338612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.338647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.338923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.338984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.339290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.339348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.339607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.339671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.339966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.340000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.340327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.340401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 
00:44:22.178 [2024-07-22 23:24:58.340650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.341015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.341078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.341340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.341375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.178 qpair failed and we were unable to recover it. 00:44:22.178 [2024-07-22 23:24:58.341509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.178 [2024-07-22 23:24:58.341544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.341804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.341850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.342157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.342220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.342520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.342554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.342813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.342874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.343182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.343229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.343567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.343602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 
00:44:22.179 [2024-07-22 23:24:58.343899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.343933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.344128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.344161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.344342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.344405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.344653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.344715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.345026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.345060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.345370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.345405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.345657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.345703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.346016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.346079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.346380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.346415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.346615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.346677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 
00:44:22.179 [2024-07-22 23:24:58.346977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.347023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.347292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.347367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.347636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.347699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.347973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.348035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.348345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.348401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.348610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.348644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.348887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.348921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.349238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.349301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.349599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.349661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.349940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.350002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 
00:44:22.179 [2024-07-22 23:24:58.350243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.350276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.350490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.350525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.350733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.350780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.351056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.351118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.351372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.351407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.351533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.351565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.351779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.351826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.352087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.352154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.352474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.352509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.352698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.352752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 
00:44:22.179 [2024-07-22 23:24:58.353026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.353060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.179 qpair failed and we were unable to recover it. 00:44:22.179 [2024-07-22 23:24:58.353251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.179 [2024-07-22 23:24:58.353324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.353577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.353610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.353841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.353888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.354115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.354149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.354375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.354410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.354665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.354721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.355004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.355050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.355280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.355322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.355577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.355625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 
00:44:22.180 [2024-07-22 23:24:58.355850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.355884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.356123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.356170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.356426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.356461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.356708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.356743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.356925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.356959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.357201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.357248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.357536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.357571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.357817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.357863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.358141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.358175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.358441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.358476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 
00:44:22.180 [2024-07-22 23:24:58.358711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.358744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.359011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.359057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.359281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.359320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.359522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.359555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.359827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.359861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.360080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.360126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.360388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.360423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.360687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.360734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.360983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.361017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.361263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.361318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 
00:44:22.180 [2024-07-22 23:24:58.361583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.361616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.361832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.361878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.362150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.362183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.362482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.180 [2024-07-22 23:24:58.362516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.180 qpair failed and we were unable to recover it. 00:44:22.180 [2024-07-22 23:24:58.362705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.362739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.362982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.363045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.363358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.363392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.363601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.363635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.363823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.363857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.364103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.364150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 
00:44:22.181 [2024-07-22 23:24:58.364419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.364461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.364680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.364727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.364972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.365006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.365249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:22.181 [2024-07-22 23:24:58.365284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.365576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.365640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bbb0 with addr=10.0.0.2, port=4420 00:44:22.181 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.365915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.365993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:22.181 [2024-07-22 23:24:58.366287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.366366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.366608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.366675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:22.181 qpair failed and we were unable to recover it. 
00:44:22.181 [2024-07-22 23:24:58.366923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.366974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.367211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.367279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.367580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.367652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.367943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.368010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.368338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.368380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.368623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.368676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.368963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.368998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.369238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.369288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.369506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.369555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.369778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.369870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 
00:44:22.181 [2024-07-22 23:24:58.370155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.370205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.370471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.370522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.370783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.370833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.371152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.371208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.371456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.371492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.371634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.371691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.371959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.371994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.372232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.372303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.372558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.372617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.372900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.372967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 
00:44:22.181 [2024-07-22 23:24:58.373280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.373398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.373620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.181 [2024-07-22 23:24:58.373690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.181 qpair failed and we were unable to recover it. 00:44:22.181 [2024-07-22 23:24:58.374001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.374039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.374290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.374336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.374493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.374529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.374783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.374832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.375104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.375139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.375333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.375391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.375546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.375581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.375746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.375794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 
00:44:22.182 [2024-07-22 23:24:58.376014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.376057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.376287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.376357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.376531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.376579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.376789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.376837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.377055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.377090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.377335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.377402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.377525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.377562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.377780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.377829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.378069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.378105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.378327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.378390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 
00:44:22.182 [2024-07-22 23:24:58.378549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.378584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.378789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.378840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.379032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.379067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.379218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.379266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.379484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.379519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.379767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.379815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.380107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.380142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.380417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.380453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.380614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.380649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.380836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.380885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 
00:44:22.182 [2024-07-22 23:24:58.381128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.381164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.381396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.381432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.381609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.381644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.381792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.381841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.382132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.382168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.382360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.382409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.382593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.382634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.382865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.382914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.383185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.383220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.182 qpair failed and we were unable to recover it. 00:44:22.182 [2024-07-22 23:24:58.383433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.182 [2024-07-22 23:24:58.383483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 
00:44:22.183 [2024-07-22 23:24:58.383713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.383748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.383949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.383998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.384231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.384265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.384430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.384479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.384723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.384759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.384970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.385018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.385273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.385323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.385529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.385578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.385821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.385856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.386070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.386118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 
00:44:22.183 [2024-07-22 23:24:58.386370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.386412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.386574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.386623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.386907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.386941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.387205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.387254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.387462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.387498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.387761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.387809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.388051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.388085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.388300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.388377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.388509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.388545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.388706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.388754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 
00:44:22.183 [2024-07-22 23:24:58.388991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.389026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.389237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.389284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.389477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.389512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.389698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.389746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.390033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.390068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.390328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.390395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.390561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.390597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.390708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.390762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.390978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.391014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.391177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.391224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 
00:44:22.183 [2024-07-22 23:24:58.391401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.391437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.391549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.391610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.391806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.391841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.392008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.392057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.392222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.392257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.392439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.392488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.392733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.183 [2024-07-22 23:24:58.392769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.183 qpair failed and we were unable to recover it. 00:44:22.183 [2024-07-22 23:24:58.392990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.393039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.393281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.393326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.393520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.393595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 
00:44:22.184 [2024-07-22 23:24:58.393871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.393907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.394082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.394145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.394367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.394411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.394595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.394643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.394947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.394981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.395221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.395270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.395432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.395467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.395639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.395687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.395918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.395964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.396172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.396220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 
00:44:22.184 [2024-07-22 23:24:58.396447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.396488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.396706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.396755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.396996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.397031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.397225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.397273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.397476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.397511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.397736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.397784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.398062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.398096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.398287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.398353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.398558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.398593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.398783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.398831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 
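[editor's note] The repeated "errno = 111" in the posix_sock_create failures above is ECONNREFUSED: every connect() attempt to 10.0.0.2 port 4420 (the default NVMe/TCP port) is being refused, which is consistent with the target side being intentionally taken down while this nvmf_target_disconnect test case runs. A quick way to decode the errno value, shown here only as an illustrative sketch and not part of the test scripts themselves:

    # Decode errno 111 seen in the posix_sock_create errors above
    # (illustrative only; not part of the SPDK autotest scripts).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused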
00:44:22.184 [2024-07-22 23:24:58.399105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.399140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.399401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.399451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.399734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.399768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.399980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.400028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.400288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.400345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.400507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.400542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.400822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.400856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.401089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 [2024-07-22 23:24:58.401136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.184 qpair failed and we were unable to recover it. 00:44:22.184 [2024-07-22 23:24:58.401407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.184 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:22.184 [2024-07-22 23:24:58.401443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 
00:44:22.185 [2024-07-22 23:24:58.401651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:22.185 [2024-07-22 23:24:58.401700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.401927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.401964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:22.185 [2024-07-22 23:24:58.402218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.402269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:22.185 [2024-07-22 23:24:58.402450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.402485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.402706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.402755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.403057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.403091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.403368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.403418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.403609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.403644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 
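[editor's note] The xtrace lines interleaved with the connection errors above show nvmf_target_disconnect_tc2 installing its cleanup trap (nvmf/common.sh@484) and then creating the backing bdev with "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" (host/target_disconnect.sh@19). As a rough standalone equivalent, and assuming rpc_cmd is the usual SPDK test-framework wrapper around scripts/rpc.py talking to the default RPC socket, the same setup steps would look like:

    # Sketch of the traced setup, under the assumption that rpc_cmd
    # forwards to scripts/rpc.py against the default RPC socket.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # 64 MB malloc bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0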
00:44:22.185 [2024-07-22 23:24:58.403843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.403891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.404172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.404206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.404460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.404509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.404762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.404797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.405043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.405090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.405273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.405315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.405467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.405515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.405801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.405836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.406105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.406153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.406415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.406450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 
00:44:22.185 [2024-07-22 23:24:58.406631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.406679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.406923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.406963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.407179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.407227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.407409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.407445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.407637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.407686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.407927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.407961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.408223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.408271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.408510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.408545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.408789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.408837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.409016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.409051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 
00:44:22.185 [2024-07-22 23:24:58.409294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.409373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.409509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.409543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.409713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.409760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.410034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.410069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.410299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.410383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.410558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.410593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.410851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.410898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.411112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.185 [2024-07-22 23:24:58.411147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.185 qpair failed and we were unable to recover it. 00:44:22.185 [2024-07-22 23:24:58.411347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.186 [2024-07-22 23:24:58.411397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.186 qpair failed and we were unable to recover it. 00:44:22.186 [2024-07-22 23:24:58.411578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.186 [2024-07-22 23:24:58.411613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 00:44:22.186 qpair failed and we were unable to recover it. 
00:44:22.186 [2024-07-22 23:24:58.411769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.186 [2024-07-22 23:24:58.411817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:22.186 qpair failed and we were unable to recover it.
00:44:22.187 [... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 repeats continuously through 2024-07-22 23:24:58.434, each attempt ending with "qpair failed and we were unable to recover it." ...]
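errno = 111 is ECONNREFUSED on Linux: connect() reaches 10.0.0.2 but nothing is accepting on port 4420 yet, so the host-side initiator in nvme_tcp_qpair_connect_sock gives up on the qpair each time, presumably because the target in this disconnect test is not (or no longer) listening at this point. A quick way to translate the errno number, shown only as an illustration (the one-liner below is not part of the test scripts):

  # Map errno 111 to its symbolic name and message (Linux)
  $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  ECONNREFUSED - Connection refused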
00:44:22.188 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 continue from 2024-07-22 23:24:58.434 through 23:24:58.436 ...]
00:44:22.188 Malloc0
00:44:22.188 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:44:22.188 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:44:22.188 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:44:22.188 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:44:22.188 [2024-07-22 23:24:58.438895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.188 [2024-07-22 23:24:58.438977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:22.188 qpair failed and we were unable to recover it.
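The bare "Malloc0" line is the value returned by an earlier RPC (apparently the malloc bdev that is attached as a namespace further down), and the traced rpc_cmd nvmf_create_transport -t tcp -o is the harness configuring the target: rpc_cmd, a helper from autotest_common.sh, effectively forwards its arguments to SPDK's scripts/rpc.py. A hand-run sketch of the same step against a local target; the trailing -o is carried over verbatim from the log (see rpc.py nvmf_create_transport --help for its meaning):

  # Sketch: create the TCP transport on a running SPDK nvmf target
  # (rpc.py defaults to the target's /var/tmp/spdk.sock RPC socket)
  $ scripts/rpc.py nvmf_create_transport -t tcp -o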
00:44:22.188 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 continue from 2024-07-22 23:24:58.439 through 23:24:58.444 ...]
00:44:22.188 [2024-07-22 23:24:58.444413] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:44:22.188 [... the same failure pattern resumes immediately and continues through 2024-07-22 23:24:58.455, still against tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 ...]
00:44:22.456 [2024-07-22 23:24:58.456175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.456 [2024-07-22 23:24:58.456210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:22.456 qpair failed and we were unable to recover it.
00:44:22.456 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:44:22.456 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:44:22.456 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:44:22.456 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:44:22.456 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 continue from 2024-07-22 23:24:58.457 through 23:24:58.458 ...]
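Here target_disconnect.sh creates the subsystem the host will eventually attach to. The same call made by hand would look roughly like the sketch below; per scripts/rpc.py usage (not stated in this log), -a is --allow-any-host and -s is --serial-number:

  # Sketch: create NVMe-oF subsystem cnode1, allow any host NQN, set its serial number
  $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001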
00:44:22.456 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420 continue from 2024-07-22 23:24:58.458 through 23:24:58.464 ...]
00:44:22.457 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:44:22.457 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:44:22.457 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:44:22.457 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:44:22.457 [2024-07-22 23:24:58.465236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.457 [2024-07-22 23:24:58.465278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8738000b90 with addr=10.0.0.2, port=4420
00:44:22.457 qpair failed and we were unable to recover it.
00:44:22.457 [2024-07-22 23:24:58.465537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.457 [2024-07-22 23:24:58.465597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.457 qpair failed and we were unable to recover it.
00:44:22.457 [... identical failures, now referencing tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420, continue through 2024-07-22 23:24:58.466 ...]
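The last configuration step visible in this excerpt attaches the Malloc0 bdev to cnode1 as a namespace; note that the failing qpair address changes from 0x7f8738000b90 to 0x7f8748000b90 around the same time, i.e. a new qpair object is retrying the same endpoint. A hand-run sketch of this call, plus the listener step that would have to follow before 10.0.0.2:4420 can accept connections; the add_listener line is an assumption for illustration and does not appear in this excerpt:

  # Sketch: expose bdev Malloc0 as a namespace of subsystem cnode1
  $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Assumed follow-up (not shown above): start listening on the address the host keeps dialing
  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420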
00:44:22.457 [2024-07-22 23:24:58.466593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.466628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.466855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.466894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.467142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.467205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.467419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.467458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.467716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.467772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.468026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.468079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.468347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.468384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.468546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.468600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.468817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.468872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.469117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.469174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 
00:44:22.457 [2024-07-22 23:24:58.469346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.469403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.469617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.469676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.469901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.469961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.470169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.470204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.470453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.470514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.470728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.470784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.457 qpair failed and we were unable to recover it. 00:44:22.457 [2024-07-22 23:24:58.471005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.457 [2024-07-22 23:24:58.471061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.471263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.471299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.471479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.471535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.471761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.471828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 
00:44:22.458 [2024-07-22 23:24:58.472049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.472108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 [2024-07-22 23:24:58.472332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.472370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 [2024-07-22 23:24:58.472583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.472637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:44:22.458 [2024-07-22 23:24:58.472857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.472912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:44:22.458 [2024-07-22 23:24:58.473096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.473151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:44:22.458 [2024-07-22 23:24:58.473400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.473458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 [2024-07-22 23:24:58.473713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.473769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 [2024-07-22 23:24:58.474015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:22.458 [2024-07-22 23:24:58.474069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 [2024-07-22 23:24:58.474264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.474299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.474501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.474559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.474779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.474837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.475021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.475076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.475320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.475359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.475533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.475570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.475771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.475830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.476088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.476144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.458 [2024-07-22 23:24:58.476390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.458 [2024-07-22 23:24:58.476448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8748000b90 with addr=10.0.0.2, port=4420 00:44:22.458 qpair failed and we were unable to recover it. 
00:44:22.458 [2024-07-22 23:24:58.476690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:44:22.458 [2024-07-22 23:24:58.485869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:44:22.458 [2024-07-22 23:24:58.486017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:44:22.458 [2024-07-22 23:24:58.486055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:44:22.458 [2024-07-22 23:24:58.486076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:44:22.458 [2024-07-22 23:24:58.486092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90
00:44:22.458 [2024-07-22 23:24:58.486138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:44:22.458 qpair failed and we were unable to recover it.
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:44:22.458 23:24:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1070487
00:44:22.458 [2024-07-22 23:24:58.495682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:44:22.458 [2024-07-22 23:24:58.495805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:44:22.458 [2024-07-22 23:24:58.495848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:44:22.458 [2024-07-22 23:24:58.495869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:44:22.458 [2024-07-22 23:24:58.495885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90
00:44:22.458 [2024-07-22 23:24:58.495925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:44:22.458 qpair failed and we were unable to recover it.
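For reference, the rpc_cmd calls traced above are issued through SPDK's scripts/rpc.py by the test harness. Run by hand against an already-running spdk_tgt that has the Malloc0 bdev and the nqn.2016-06.io.spdk:cnode1 subsystem created (as this test does earlier), the same target-side steps would look roughly like the following sketch (not part of this log):
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Once the second command completes, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above, and subsequent host-side connection attempts stop failing with errno 111.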
00:44:22.458 [2024-07-22 23:24:58.505749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.458 [2024-07-22 23:24:58.505872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.458 [2024-07-22 23:24:58.505908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.458 [2024-07-22 23:24:58.505927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.458 [2024-07-22 23:24:58.505943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.458 [2024-07-22 23:24:58.505983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.458 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.515727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.515854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.515888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.515907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.515922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.515964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.525720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.525854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.525890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.525910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.525926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.525966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 
00:44:22.459 [2024-07-22 23:24:58.535756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.535905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.535942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.535963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.535993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.536046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.545781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.545920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.545956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.545976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.545992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.546038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.555851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.556054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.556090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.556110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.556127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.556169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 
00:44:22.459 [2024-07-22 23:24:58.565825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.565958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.565994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.566013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.566029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.566070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.575806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.575929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.575965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.575984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.575999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.576040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.585823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.585975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.586017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.586037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.586053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.586093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 
00:44:22.459 [2024-07-22 23:24:58.595908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.596033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.596068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.596087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.596103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.596143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.605937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.606066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.606102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.606121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.606138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.606178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.615961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.616121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.616156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.616176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.616192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.616232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 
00:44:22.459 [2024-07-22 23:24:58.625977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.626119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.626154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.626173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.626197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.626239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.459 qpair failed and we were unable to recover it. 00:44:22.459 [2024-07-22 23:24:58.635984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.459 [2024-07-22 23:24:58.636108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.459 [2024-07-22 23:24:58.636140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.459 [2024-07-22 23:24:58.636158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.459 [2024-07-22 23:24:58.636175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.459 [2024-07-22 23:24:58.636215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 00:44:22.460 [2024-07-22 23:24:58.646037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.646154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.646189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.646208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.646224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.646265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 
00:44:22.460 [2024-07-22 23:24:58.656062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.656180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.656216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.656235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.656251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.656291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 00:44:22.460 [2024-07-22 23:24:58.666121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.666252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.666287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.666306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.666336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.666377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 00:44:22.460 [2024-07-22 23:24:58.676167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.676301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.676345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.676364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.676381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.676421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 
00:44:22.460 [2024-07-22 23:24:58.686165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.686338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.686375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.686394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.686410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.686450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 00:44:22.460 [2024-07-22 23:24:58.696178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.696331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.696366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.696386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.696401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.696441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 00:44:22.460 [2024-07-22 23:24:58.706229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.706382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.706418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.706437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.706453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.706494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 
00:44:22.460 [2024-07-22 23:24:58.716253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.716393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.716427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.716445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.716468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.460 [2024-07-22 23:24:58.716510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.460 qpair failed and we were unable to recover it. 00:44:22.460 [2024-07-22 23:24:58.726268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.460 [2024-07-22 23:24:58.726401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.460 [2024-07-22 23:24:58.726440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.460 [2024-07-22 23:24:58.726459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.460 [2024-07-22 23:24:58.726476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.461 [2024-07-22 23:24:58.726517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.461 qpair failed and we were unable to recover it. 00:44:22.461 [2024-07-22 23:24:58.736300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.461 [2024-07-22 23:24:58.736448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.461 [2024-07-22 23:24:58.736482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.461 [2024-07-22 23:24:58.736501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.461 [2024-07-22 23:24:58.736517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.461 [2024-07-22 23:24:58.736559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.461 qpair failed and we were unable to recover it. 
00:44:22.461 [2024-07-22 23:24:58.746343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.461 [2024-07-22 23:24:58.746531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.461 [2024-07-22 23:24:58.746566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.461 [2024-07-22 23:24:58.746584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.461 [2024-07-22 23:24:58.746600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.461 [2024-07-22 23:24:58.746640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.461 qpair failed and we were unable to recover it. 00:44:22.461 [2024-07-22 23:24:58.756403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.461 [2024-07-22 23:24:58.756531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.461 [2024-07-22 23:24:58.756565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.461 [2024-07-22 23:24:58.756584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.461 [2024-07-22 23:24:58.756600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.461 [2024-07-22 23:24:58.756641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.461 qpair failed and we were unable to recover it. 00:44:22.721 [2024-07-22 23:24:58.766414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.721 [2024-07-22 23:24:58.766550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.721 [2024-07-22 23:24:58.766584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.721 [2024-07-22 23:24:58.766603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.721 [2024-07-22 23:24:58.766619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.721 [2024-07-22 23:24:58.766659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.721 qpair failed and we were unable to recover it. 
00:44:22.721 [2024-07-22 23:24:58.776475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.721 [2024-07-22 23:24:58.776625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.721 [2024-07-22 23:24:58.776660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.721 [2024-07-22 23:24:58.776680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.721 [2024-07-22 23:24:58.776697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.721 [2024-07-22 23:24:58.776738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.721 qpair failed and we were unable to recover it. 00:44:22.721 [2024-07-22 23:24:58.786557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.721 [2024-07-22 23:24:58.786687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.721 [2024-07-22 23:24:58.786722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.721 [2024-07-22 23:24:58.786742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.721 [2024-07-22 23:24:58.786758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.721 [2024-07-22 23:24:58.786798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.721 qpair failed and we were unable to recover it. 00:44:22.721 [2024-07-22 23:24:58.796434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.721 [2024-07-22 23:24:58.796558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.721 [2024-07-22 23:24:58.796592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.721 [2024-07-22 23:24:58.796611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.721 [2024-07-22 23:24:58.796627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.796666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 
00:44:22.722 [2024-07-22 23:24:58.806468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.806594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.806629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.806655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.806673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.806713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.816540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.816659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.816693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.816711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.816727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.816767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.826505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.826625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.826660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.826679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.826695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.826735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 
00:44:22.722 [2024-07-22 23:24:58.836574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.836707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.836741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.836760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.836777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.836816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.846636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.846759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.846793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.846812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.846828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.846868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.856621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.856739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.856772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.856790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.856806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.856846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 
00:44:22.722 [2024-07-22 23:24:58.866599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.866717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.866752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.866772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.866787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.866827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.876699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.876848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.876883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.876902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.876917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.876957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.886678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.886794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.886833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.886852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.886868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.886908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 
00:44:22.722 [2024-07-22 23:24:58.896716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.896831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.896872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.896893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.896909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.896949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.906742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.906858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.906892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.906910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.906927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.906967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.722 qpair failed and we were unable to recover it. 00:44:22.722 [2024-07-22 23:24:58.916799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.722 [2024-07-22 23:24:58.916923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.722 [2024-07-22 23:24:58.916958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.722 [2024-07-22 23:24:58.916977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.722 [2024-07-22 23:24:58.916992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.722 [2024-07-22 23:24:58.917033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 
00:44:22.723 [2024-07-22 23:24:58.926806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.926958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.926993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.927012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.927028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.927068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:58.936825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.936938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.936977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.936996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.937013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.937060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:58.946853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.946967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.946999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.947017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.947034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.947074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 
00:44:22.723 [2024-07-22 23:24:58.956911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.957075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.957111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.957130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.957147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.957187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:58.966967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.967082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.967122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.967141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.967157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.967197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:58.977025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.977153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.977187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.977206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.977222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.977263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 
00:44:22.723 [2024-07-22 23:24:58.986992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.987113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.987156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.987177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.987193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.987234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:58.997022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:58.997146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:58.997180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:58.997199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:58.997215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:58.997255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:59.007072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:59.007214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:59.007248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:59.007267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:59.007283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:59.007334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 
00:44:22.723 [2024-07-22 23:24:59.017223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:59.017357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:59.017391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:59.017409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:59.017426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:59.017465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.723 [2024-07-22 23:24:59.027134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.723 [2024-07-22 23:24:59.027301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.723 [2024-07-22 23:24:59.027349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.723 [2024-07-22 23:24:59.027369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.723 [2024-07-22 23:24:59.027386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.723 [2024-07-22 23:24:59.027436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.723 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.037137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.037263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.037298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.037328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.037346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.037386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 
00:44:22.985 [2024-07-22 23:24:59.047173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.047318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.047353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.047372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.047388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.047428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.057169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.057285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.057325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.057345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.057362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.057402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.067214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.067374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.067410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.067429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.067445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.067486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 
00:44:22.985 [2024-07-22 23:24:59.077220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.077357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.077392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.077411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.077427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.077468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.087325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.087457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.087492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.087511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.087528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.087568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.097270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.097413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.097449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.097468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.097484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.097524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 
00:44:22.985 [2024-07-22 23:24:59.107303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.107427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.107461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.107480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.107496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.107536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.117353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.117480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.117515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.117534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.117558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.117598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 00:44:22.985 [2024-07-22 23:24:59.127427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.127545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.985 [2024-07-22 23:24:59.127583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.985 [2024-07-22 23:24:59.127601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.985 [2024-07-22 23:24:59.127617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.985 [2024-07-22 23:24:59.127657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.985 qpair failed and we were unable to recover it. 
00:44:22.985 [2024-07-22 23:24:59.137418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.985 [2024-07-22 23:24:59.137536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.137570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.137589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.137606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.137646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.147487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.147616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.147651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.147671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.147686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.147726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.157531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.157709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.157744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.157762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.157778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.157818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 
00:44:22.986 [2024-07-22 23:24:59.167623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.167765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.167800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.167819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.167836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.167876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.177623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.177747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.177781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.177800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.177816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.177856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.187609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.187724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.187758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.187777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.187793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.187831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 
00:44:22.986 [2024-07-22 23:24:59.197657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.197786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.197820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.197840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.197856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.197895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.207628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.207751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.207786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.207812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.207828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.207868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.217671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.217804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.217839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.217858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.217874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.217914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 
00:44:22.986 [2024-07-22 23:24:59.227675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.227820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.227855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.227874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.227890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.227930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.237734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.237851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.237884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.237901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.237918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.237965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.247744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.247867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.247901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.247919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.247935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.247974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 
00:44:22.986 [2024-07-22 23:24:59.257799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.257949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.257984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.986 [2024-07-22 23:24:59.258003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.986 [2024-07-22 23:24:59.258018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.986 [2024-07-22 23:24:59.258059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.986 qpair failed and we were unable to recover it. 00:44:22.986 [2024-07-22 23:24:59.267800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.986 [2024-07-22 23:24:59.267921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.986 [2024-07-22 23:24:59.267953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.987 [2024-07-22 23:24:59.267971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.987 [2024-07-22 23:24:59.267988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.987 [2024-07-22 23:24:59.268026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.987 qpair failed and we were unable to recover it. 00:44:22.987 [2024-07-22 23:24:59.277903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.987 [2024-07-22 23:24:59.278036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.987 [2024-07-22 23:24:59.278071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.987 [2024-07-22 23:24:59.278090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.987 [2024-07-22 23:24:59.278106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.987 [2024-07-22 23:24:59.278147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.987 qpair failed and we were unable to recover it. 
00:44:22.987 [2024-07-22 23:24:59.287874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:22.987 [2024-07-22 23:24:59.287998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:22.987 [2024-07-22 23:24:59.288034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:22.987 [2024-07-22 23:24:59.288054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:22.987 [2024-07-22 23:24:59.288070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:22.987 [2024-07-22 23:24:59.288109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:22.987 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.297933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.298052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.298087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.298114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.298131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.298170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.307933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.308075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.308111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.308130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.308146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.308186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 
00:44:23.249 [2024-07-22 23:24:59.318039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.318175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.318210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.318229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.318245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.318285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.327969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.328088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.328122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.328141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.328156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.328196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.338060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.338218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.338252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.338271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.338287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.338335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 
00:44:23.249 [2024-07-22 23:24:59.348074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.348185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.348225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.348244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.348260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.348299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.358067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.358220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.358254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.358273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.358289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.358337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.368101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.368223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.368259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.368277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.368293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.368342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 
00:44:23.249 [2024-07-22 23:24:59.378142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.378290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.378334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.378354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.378371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.378411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.388172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.388294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.388346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.388367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.388383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.388424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.398229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.398386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.398422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.398441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.398457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.398498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 
00:44:23.249 [2024-07-22 23:24:59.408218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.249 [2024-07-22 23:24:59.408343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.249 [2024-07-22 23:24:59.408376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.249 [2024-07-22 23:24:59.408395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.249 [2024-07-22 23:24:59.408411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.249 [2024-07-22 23:24:59.408452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.249 qpair failed and we were unable to recover it. 00:44:23.249 [2024-07-22 23:24:59.418261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.418387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.418423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.418442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.418458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.418499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.428323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.428455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.428491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.428511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.428527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.428575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 
00:44:23.250 [2024-07-22 23:24:59.438431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.438619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.438654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.438673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.438689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.438730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.448367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.448482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.448520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.448539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.448556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.448596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.458390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.458545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.458580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.458600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.458616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.458656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 
00:44:23.250 [2024-07-22 23:24:59.468432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.468551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.468585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.468604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.468621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.468660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.478469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.478595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.478637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.478657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.478674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.478714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.488512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.488626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.488659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.488677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.488695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.488736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 
00:44:23.250 [2024-07-22 23:24:59.498590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.498770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.498804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.498823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.498839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.498879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.508538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.508647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.508682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.508701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.508716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.508757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.518560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.518689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.518724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.518742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.518766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.518807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 
00:44:23.250 [2024-07-22 23:24:59.528666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.528831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.250 [2024-07-22 23:24:59.528867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.250 [2024-07-22 23:24:59.528886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.250 [2024-07-22 23:24:59.528902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.250 [2024-07-22 23:24:59.528942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.250 qpair failed and we were unable to recover it. 00:44:23.250 [2024-07-22 23:24:59.538655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.250 [2024-07-22 23:24:59.538777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.251 [2024-07-22 23:24:59.538812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.251 [2024-07-22 23:24:59.538831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.251 [2024-07-22 23:24:59.538847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.251 [2024-07-22 23:24:59.538887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.251 qpair failed and we were unable to recover it. 00:44:23.251 [2024-07-22 23:24:59.548670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.251 [2024-07-22 23:24:59.548786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.251 [2024-07-22 23:24:59.548825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.251 [2024-07-22 23:24:59.548845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.251 [2024-07-22 23:24:59.548861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.251 [2024-07-22 23:24:59.548900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.251 qpair failed and we were unable to recover it. 
00:44:23.513 [2024-07-22 23:24:59.558713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.558841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.558877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.558897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.558913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.558958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.568699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.568869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.568905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.568924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.568942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.568982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.578747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.578862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.578896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.578915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.578931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.578971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 
00:44:23.513 [2024-07-22 23:24:59.588887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.589048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.589083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.589103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.589119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.589159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.598902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.599025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.599060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.599079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.599095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.599136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.608835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.608956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.608990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.609016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.609033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.609075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 
00:44:23.513 [2024-07-22 23:24:59.618886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.619016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.619050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.619069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.619085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.619126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.628873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.628986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.629021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.629040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.629056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.629096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.638964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.639086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.639122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.639141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.639157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.639198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 
00:44:23.513 [2024-07-22 23:24:59.648971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.649125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.649161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.649180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.649196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.649236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.658990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.659109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.659144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.659163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.659180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.513 [2024-07-22 23:24:59.659220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.513 qpair failed and we were unable to recover it. 00:44:23.513 [2024-07-22 23:24:59.668988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.513 [2024-07-22 23:24:59.669105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.513 [2024-07-22 23:24:59.669139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.513 [2024-07-22 23:24:59.669157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.513 [2024-07-22 23:24:59.669173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.669213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 
00:44:23.514 [2024-07-22 23:24:59.679092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.679286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.679333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.679354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.679370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.679412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.689090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.689211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.689244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.689262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.689279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.689326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.699117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.699237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.699272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.699298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.699326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.699367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 
00:44:23.514 [2024-07-22 23:24:59.709175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.709354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.709389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.709409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.709425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.709464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.719208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.719349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.719384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.719404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.719419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.719460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.729191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.729345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.729380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.729399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.729416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.729457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 
00:44:23.514 [2024-07-22 23:24:59.739217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.739345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.739381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.739400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.739417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.739458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.749240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.749390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.749426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.749445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.749461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.749501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.759330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.759476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.759510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.759528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.759544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.759585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 
00:44:23.514 [2024-07-22 23:24:59.769367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.769489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.769522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.769540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.769556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.769596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.779327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.779477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.779512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.779531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.779547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.779588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.514 [2024-07-22 23:24:59.789388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.789505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.789550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.789571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.789587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.789628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 
00:44:23.514 [2024-07-22 23:24:59.799417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.514 [2024-07-22 23:24:59.799550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.514 [2024-07-22 23:24:59.799585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.514 [2024-07-22 23:24:59.799604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.514 [2024-07-22 23:24:59.799620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.514 [2024-07-22 23:24:59.799660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.514 qpair failed and we were unable to recover it. 00:44:23.515 [2024-07-22 23:24:59.809396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.515 [2024-07-22 23:24:59.809518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.515 [2024-07-22 23:24:59.809552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.515 [2024-07-22 23:24:59.809571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.515 [2024-07-22 23:24:59.809587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.515 [2024-07-22 23:24:59.809627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.515 qpair failed and we were unable to recover it. 00:44:23.515 [2024-07-22 23:24:59.819460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.515 [2024-07-22 23:24:59.819579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.515 [2024-07-22 23:24:59.819614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.515 [2024-07-22 23:24:59.819633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.515 [2024-07-22 23:24:59.819650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.515 [2024-07-22 23:24:59.819690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.515 qpair failed and we were unable to recover it. 
00:44:23.791 [2024-07-22 23:24:59.829458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.791 [2024-07-22 23:24:59.829570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.791 [2024-07-22 23:24:59.829604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.791 [2024-07-22 23:24:59.829622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.791 [2024-07-22 23:24:59.829639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.791 [2024-07-22 23:24:59.829687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.791 qpair failed and we were unable to recover it. 00:44:23.791 [2024-07-22 23:24:59.839514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.791 [2024-07-22 23:24:59.839640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.791 [2024-07-22 23:24:59.839675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.791 [2024-07-22 23:24:59.839694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.791 [2024-07-22 23:24:59.839710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.791 [2024-07-22 23:24:59.839751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.791 qpair failed and we were unable to recover it. 00:44:23.791 [2024-07-22 23:24:59.849541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.849661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.849697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.849716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.849732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.849772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 
00:44:23.792 [2024-07-22 23:24:59.859599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.859727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.859760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.859779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.859795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.859835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.869630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.869749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.869782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.869801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.869817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.869858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.879682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.879806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.879848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.879869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.879885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.879925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 
00:44:23.792 [2024-07-22 23:24:59.889695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.889826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.889860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.889879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.889895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.889935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.899701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.899839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.899874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.899893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.899908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.899949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.909724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.909855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.909889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.909908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.909924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.909965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 
00:44:23.792 [2024-07-22 23:24:59.919813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.919932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.919966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.919985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.920008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.920050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.929801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.929917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.929951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.929969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.929985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.930026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.939807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.939920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.939955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.939974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.939990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.940031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 
00:44:23.792 [2024-07-22 23:24:59.949835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.949950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.949985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.950004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.950020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.950059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.959877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.959999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.960034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.960053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.960069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.960109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 00:44:23.792 [2024-07-22 23:24:59.969886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.970030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.970065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.792 [2024-07-22 23:24:59.970083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.792 [2024-07-22 23:24:59.970099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.792 [2024-07-22 23:24:59.970139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.792 qpair failed and we were unable to recover it. 
00:44:23.792 [2024-07-22 23:24:59.979888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.792 [2024-07-22 23:24:59.980044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.792 [2024-07-22 23:24:59.980079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:24:59.980098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:24:59.980114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:24:59.980154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:24:59.989990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:24:59.990108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:24:59.990142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:24:59.990161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:24:59.990177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:24:59.990216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:24:59.999980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.000130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.000165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.000184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.000200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.000240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 
00:44:23.793 [2024-07-22 23:25:00.010013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.010180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.010214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.010232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.010256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.010298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:25:00.020103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.020240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.020274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.020293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.020317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.020361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:25:00.030110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.030231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.030267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.030288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.030306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.030360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 
00:44:23.793 [2024-07-22 23:25:00.040133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.040301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.040346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.040366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.040384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.040427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:25:00.050100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.050235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.050269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.050288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.050304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.050354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:25:00.060146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.060289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.060334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.060355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.060373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.060414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 
00:44:23.793 [2024-07-22 23:25:00.070200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.070329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.070363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.070382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.070398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.070440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:25:00.080239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.080394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.080428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.080446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.080462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.080503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:23.793 [2024-07-22 23:25:00.090232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.090401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.090435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.090454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.090471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.090512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 
00:44:23.793 [2024-07-22 23:25:00.100265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:23.793 [2024-07-22 23:25:00.100408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:23.793 [2024-07-22 23:25:00.100446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:23.793 [2024-07-22 23:25:00.100474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:23.793 [2024-07-22 23:25:00.100492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:23.793 [2024-07-22 23:25:00.100534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:23.793 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.110269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.110424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.110459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.110479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.110495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.110537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.120366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.120534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.120567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.120586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.120603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.120644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 
00:44:24.055 [2024-07-22 23:25:00.130451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.130570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.130604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.130623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.130640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.130680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.140353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.140469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.140501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.140521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.140537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.140578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.150392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.150541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.150575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.150593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.150610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.150650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 
00:44:24.055 [2024-07-22 23:25:00.160457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.160601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.160634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.160652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.160669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.160710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.170476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.170605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.170638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.170657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.170673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.170714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.180506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.180635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.180667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.180685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.180702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.180745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 
00:44:24.055 [2024-07-22 23:25:00.190488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.190609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.190649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.190669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.190686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.190726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.200561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.200688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.200721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.200740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.200757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.200798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.210604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.210747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.210781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.210800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.210817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.210857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 
00:44:24.055 [2024-07-22 23:25:00.220560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.220735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.220769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.055 [2024-07-22 23:25:00.220788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.055 [2024-07-22 23:25:00.220806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.055 [2024-07-22 23:25:00.220847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.055 qpair failed and we were unable to recover it. 00:44:24.055 [2024-07-22 23:25:00.230632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.055 [2024-07-22 23:25:00.230795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.055 [2024-07-22 23:25:00.230832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.230852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.230869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.230916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.240661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.240795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.240830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.240849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.240866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.240907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 
00:44:24.056 [2024-07-22 23:25:00.250650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.250766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.250801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.250821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.250838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.250878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.260697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.260815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.260848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.260867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.260884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.260924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.270743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.270883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.270917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.270936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.270952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.270993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 
00:44:24.056 [2024-07-22 23:25:00.280761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.280893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.280935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.280955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.280972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.281013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.290805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.290945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.290979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.290998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.291014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.291055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.300813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.300964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.300998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.301017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.301033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.301074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 
00:44:24.056 [2024-07-22 23:25:00.310894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.311048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.311082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.311101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.311119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.311159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.320924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.321054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.321088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.321108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.321132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.321175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.330969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.331090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.331123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.331142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.331159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.331201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 
00:44:24.056 [2024-07-22 23:25:00.340921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.341041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.341077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.341097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.341113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.341154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.350982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.351122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.351159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.351179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.056 [2024-07-22 23:25:00.351196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.056 [2024-07-22 23:25:00.351236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.056 qpair failed and we were unable to recover it. 00:44:24.056 [2024-07-22 23:25:00.361023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.056 [2024-07-22 23:25:00.361147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.056 [2024-07-22 23:25:00.361183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.056 [2024-07-22 23:25:00.361204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.057 [2024-07-22 23:25:00.361221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.057 [2024-07-22 23:25:00.361261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.057 qpair failed and we were unable to recover it. 
00:44:24.317 [2024-07-22 23:25:00.371009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.317 [2024-07-22 23:25:00.371136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.317 [2024-07-22 23:25:00.371170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.317 [2024-07-22 23:25:00.371188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.317 [2024-07-22 23:25:00.371205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.317 [2024-07-22 23:25:00.371245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.317 qpair failed and we were unable to recover it. 00:44:24.317 [2024-07-22 23:25:00.381058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.317 [2024-07-22 23:25:00.381179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.317 [2024-07-22 23:25:00.381212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.317 [2024-07-22 23:25:00.381231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.317 [2024-07-22 23:25:00.381248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.317 [2024-07-22 23:25:00.381288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.317 qpair failed and we were unable to recover it. 00:44:24.317 [2024-07-22 23:25:00.391061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.391179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.391213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.391232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.391248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.391288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 
00:44:24.318 [2024-07-22 23:25:00.401130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.401294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.401338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.401371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.401388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.401429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.411112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.411227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.411262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.411282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.411305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.411361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.421245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.421422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.421459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.421481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.421498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.421539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 
00:44:24.318 [2024-07-22 23:25:00.431221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.431346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.431379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.431398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.431415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.431455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.441270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.441402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.441436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.441455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.441472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.441512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.451237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.451382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.451416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.451435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.451451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.451490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 
00:44:24.318 [2024-07-22 23:25:00.461290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.461418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.461451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.461470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.461487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.461527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.471333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.471453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.471486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.471505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.471522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.471562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.481380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.481531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.481564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.481583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.481600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.481642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 
00:44:24.318 [2024-07-22 23:25:00.491411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.491536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.491570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.491590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.491608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.491648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.501440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.501564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.501597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.501624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.501641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.501683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 00:44:24.318 [2024-07-22 23:25:00.511453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.318 [2024-07-22 23:25:00.511587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.318 [2024-07-22 23:25:00.511622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.318 [2024-07-22 23:25:00.511641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.318 [2024-07-22 23:25:00.511657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.318 [2024-07-22 23:25:00.511697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.318 qpair failed and we were unable to recover it. 
00:44:24.318 [2024-07-22 23:25:00.521517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.521660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.521693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.521713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.521730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.521771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.531476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.531601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.531636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.531656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.531673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.531712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.541536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.541659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.541695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.541714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.541733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.541774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 
00:44:24.319 [2024-07-22 23:25:00.551558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.551680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.551714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.551734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.551751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.551791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.561600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.561783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.561817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.561836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.561854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.561895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.571606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.571726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.571760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.571779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.571795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.571835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 
00:44:24.319 [2024-07-22 23:25:00.581624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.581754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.581788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.581807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.581824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.581865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.591707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.591857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.591898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.591918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.591935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.591976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.601691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.601827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.601861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.601880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.601898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.601939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 
00:44:24.319 [2024-07-22 23:25:00.611726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.611864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.611899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.611918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.611935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.611975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.319 [2024-07-22 23:25:00.621763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.319 [2024-07-22 23:25:00.621897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.319 [2024-07-22 23:25:00.621931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.319 [2024-07-22 23:25:00.621949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.319 [2024-07-22 23:25:00.621966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.319 [2024-07-22 23:25:00.622007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.319 qpair failed and we were unable to recover it. 00:44:24.581 [2024-07-22 23:25:00.631774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.581 [2024-07-22 23:25:00.631929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.581 [2024-07-22 23:25:00.631964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.581 [2024-07-22 23:25:00.631984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.581 [2024-07-22 23:25:00.632001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.581 [2024-07-22 23:25:00.632050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.581 qpair failed and we were unable to recover it. 
00:44:24.581 [2024-07-22 23:25:00.641819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.581 [2024-07-22 23:25:00.641962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.581 [2024-07-22 23:25:00.641997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.581 [2024-07-22 23:25:00.642016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.581 [2024-07-22 23:25:00.642032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.581 [2024-07-22 23:25:00.642073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.581 qpair failed and we were unable to recover it. 00:44:24.581 [2024-07-22 23:25:00.651838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.581 [2024-07-22 23:25:00.651959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.581 [2024-07-22 23:25:00.651992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.581 [2024-07-22 23:25:00.652011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.581 [2024-07-22 23:25:00.652028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.581 [2024-07-22 23:25:00.652068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.581 qpair failed and we were unable to recover it. 00:44:24.581 [2024-07-22 23:25:00.661890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.581 [2024-07-22 23:25:00.662009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.581 [2024-07-22 23:25:00.662042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.581 [2024-07-22 23:25:00.662061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.581 [2024-07-22 23:25:00.662078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.581 [2024-07-22 23:25:00.662118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.581 qpair failed and we were unable to recover it. 
00:44:24.581 [2024-07-22 23:25:00.671873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.581 [2024-07-22 23:25:00.671994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.581 [2024-07-22 23:25:00.672027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.581 [2024-07-22 23:25:00.672045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.581 [2024-07-22 23:25:00.672062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.581 [2024-07-22 23:25:00.672102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.581 qpair failed and we were unable to recover it. 00:44:24.581 [2024-07-22 23:25:00.681959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.581 [2024-07-22 23:25:00.682122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.581 [2024-07-22 23:25:00.682163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.581 [2024-07-22 23:25:00.682183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.581 [2024-07-22 23:25:00.682200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.581 [2024-07-22 23:25:00.682242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.691997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.692125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.692159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.692179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.692196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.692237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 
00:44:24.582 [2024-07-22 23:25:00.702009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.702137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.702171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.702190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.702208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.702252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.711984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.712104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.712138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.712159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.712176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.712217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.722066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.722196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.722231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.722250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.722267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.722324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 
00:44:24.582 [2024-07-22 23:25:00.732037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.732156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.732190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.732209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.732226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.732268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.742138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.742261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.742295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.742326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.742346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.742387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.752146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.752280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.752326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.752349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.752367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.752409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 
00:44:24.582 [2024-07-22 23:25:00.762212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.762344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.762379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.762398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.762415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.762457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.772169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.772327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.772362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.772381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.772401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.772441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.782264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.782398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.782433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.782453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.782470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.782511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 
00:44:24.582 [2024-07-22 23:25:00.792261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.792413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.792447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.792467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.792485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.792527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.802267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.802415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.802450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.802469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.802486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.802527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 00:44:24.582 [2024-07-22 23:25:00.812349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.582 [2024-07-22 23:25:00.812494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.582 [2024-07-22 23:25:00.812529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.582 [2024-07-22 23:25:00.812549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.582 [2024-07-22 23:25:00.812573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.582 [2024-07-22 23:25:00.812616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.582 qpair failed and we were unable to recover it. 
00:44:24.582 [2024-07-22 23:25:00.822332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.822452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.822486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.822507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.822524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.822564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 00:44:24.583 [2024-07-22 23:25:00.832343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.832500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.832534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.832553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.832571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.832612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 00:44:24.583 [2024-07-22 23:25:00.842392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.842528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.842563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.842582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.842600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.842640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 
00:44:24.583 [2024-07-22 23:25:00.852433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.852587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.852622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.852641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.852658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.852699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 00:44:24.583 [2024-07-22 23:25:00.862507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.862631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.862666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.862685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.862702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.862743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 00:44:24.583 [2024-07-22 23:25:00.872495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.872648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.872682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.872701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.872719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.872760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 
00:44:24.583 [2024-07-22 23:25:00.882551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.583 [2024-07-22 23:25:00.882680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.583 [2024-07-22 23:25:00.882715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.583 [2024-07-22 23:25:00.882734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.583 [2024-07-22 23:25:00.882751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.583 [2024-07-22 23:25:00.882792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.583 qpair failed and we were unable to recover it. 00:44:24.844 [2024-07-22 23:25:00.892574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.844 [2024-07-22 23:25:00.892721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.844 [2024-07-22 23:25:00.892755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.844 [2024-07-22 23:25:00.892776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.844 [2024-07-22 23:25:00.892794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.844 [2024-07-22 23:25:00.892835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.844 qpair failed and we were unable to recover it. 00:44:24.844 [2024-07-22 23:25:00.902612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.844 [2024-07-22 23:25:00.902741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.844 [2024-07-22 23:25:00.902775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.844 [2024-07-22 23:25:00.902802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.844 [2024-07-22 23:25:00.902820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.844 [2024-07-22 23:25:00.902861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.844 qpair failed and we were unable to recover it. 
00:44:24.844 [2024-07-22 23:25:00.912635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.844 [2024-07-22 23:25:00.912759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.844 [2024-07-22 23:25:00.912793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.844 [2024-07-22 23:25:00.912812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.844 [2024-07-22 23:25:00.912829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.912869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:00.922674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.922796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.922831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.922850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.922868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.922909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:00.932664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.932781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.932815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.932834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.932852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.932893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 
00:44:24.845 [2024-07-22 23:25:00.942699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.942854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.942889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.942908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.942925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.942966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:00.952721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.952872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.952906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.952925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.952941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.952982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:00.962776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.962938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.962972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.962990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.963008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.963049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 
00:44:24.845 [2024-07-22 23:25:00.972796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.972926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.972961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.972980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.972997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.973038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:00.982822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.982955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.982989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.983009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.983025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.983066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:00.992835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:00.992958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:00.992992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:00.993019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:00.993037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:00.993077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 
00:44:24.845 [2024-07-22 23:25:01.002885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:01.003061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:01.003095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:01.003114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:01.003132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:01.003172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:01.012938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:01.013052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:01.013087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:01.013107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:01.013124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:01.013165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:01.022948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:01.023064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:01.023098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:01.023118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:01.023135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:01.023176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 
00:44:24.845 [2024-07-22 23:25:01.033008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:01.033148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:01.033183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:01.033203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:01.033220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:01.033262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.845 [2024-07-22 23:25:01.043087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.845 [2024-07-22 23:25:01.043212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.845 [2024-07-22 23:25:01.043246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.845 [2024-07-22 23:25:01.043265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.845 [2024-07-22 23:25:01.043282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.845 [2024-07-22 23:25:01.043332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.845 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.053028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.053172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.053207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.053228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.053244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.053286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 
00:44:24.846 [2024-07-22 23:25:01.063068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.063212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.063246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.063265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.063282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.063333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.073080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.073229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.073263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.073282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.073300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.073351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.083138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.083274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.083323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.083346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.083364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.083406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 
00:44:24.846 [2024-07-22 23:25:01.093189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.093384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.093419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.093439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.093457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.093498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.103146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.103266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.103301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.103331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.103349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.103390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.113211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.113341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.113376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.113395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.113412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.113453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 
00:44:24.846 [2024-07-22 23:25:01.123266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.123408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.123443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.123462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.123478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.123526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.133274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.133410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.133445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.133464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.133481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.133522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:24.846 [2024-07-22 23:25:01.143327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.143480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.143514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.143533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.143550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.143590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 
00:44:24.846 [2024-07-22 23:25:01.153435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:24.846 [2024-07-22 23:25:01.153555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:24.846 [2024-07-22 23:25:01.153593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:24.846 [2024-07-22 23:25:01.153613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:24.846 [2024-07-22 23:25:01.153630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:24.846 [2024-07-22 23:25:01.153671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:24.846 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.163410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.163537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.163571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.163591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.163608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.163650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.173558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.173700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.173742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.173764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.173781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.173822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 
00:44:25.108 [2024-07-22 23:25:01.183469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.183593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.183628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.183648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.183665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.183705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.193537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.193684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.193719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.193738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.193754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.193795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.203578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.203704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.203738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.203758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.203775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.203815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 
00:44:25.108 [2024-07-22 23:25:01.213493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.213649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.213683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.213702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.213726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.213769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.223598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.223714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.223749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.223768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.223785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.223826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.233542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.233674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.233708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.233727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.233744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.233786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 
00:44:25.108 [2024-07-22 23:25:01.243593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.243732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.243767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.243787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.243805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.243846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.253637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.253753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.253787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.253807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.253824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.253864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 00:44:25.108 [2024-07-22 23:25:01.263657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.263792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.108 [2024-07-22 23:25:01.263826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.108 [2024-07-22 23:25:01.263845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.108 [2024-07-22 23:25:01.263862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.108 [2024-07-22 23:25:01.263902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.108 qpair failed and we were unable to recover it. 
00:44:25.108 [2024-07-22 23:25:01.273669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.108 [2024-07-22 23:25:01.273816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.273850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.273870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.273887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.273927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.283709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.283846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.283882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.283902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.283918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.283959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.293759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.293888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.293923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.293942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.293959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.294001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 
00:44:25.109 [2024-07-22 23:25:01.303792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.303913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.303947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.303973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.303991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.304032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.313833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.313955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.313989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.314009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.314026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.314066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.323853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.323980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.324013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.324032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.324050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.324090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 
00:44:25.109 [2024-07-22 23:25:01.333849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.334008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.334042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.334061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.334078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.334119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.343915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.344029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.344064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.344084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.344101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.344142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.353938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.354064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.354099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.354119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.354136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.354177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 
00:44:25.109 [2024-07-22 23:25:01.364032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.364194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.364228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.364247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.364264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.364305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.374047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.374179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.374214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.374233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.374250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.374290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.384023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.384143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.384177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.384196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.384213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.384253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 
00:44:25.109 [2024-07-22 23:25:01.394059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.394176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.394210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.394236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.109 [2024-07-22 23:25:01.394254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.109 [2024-07-22 23:25:01.394294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.109 qpair failed and we were unable to recover it. 00:44:25.109 [2024-07-22 23:25:01.404112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.109 [2024-07-22 23:25:01.404241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.109 [2024-07-22 23:25:01.404275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.109 [2024-07-22 23:25:01.404294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.110 [2024-07-22 23:25:01.404321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.110 [2024-07-22 23:25:01.404365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.110 qpair failed and we were unable to recover it. 00:44:25.110 [2024-07-22 23:25:01.414117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.110 [2024-07-22 23:25:01.414240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.110 [2024-07-22 23:25:01.414274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.110 [2024-07-22 23:25:01.414294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.110 [2024-07-22 23:25:01.414330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.110 [2024-07-22 23:25:01.414375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.110 qpair failed and we were unable to recover it. 
00:44:25.372 [2024-07-22 23:25:01.424141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.424278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.424322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.424344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.424362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.424403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.434164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.434275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.434317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.434339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.434364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.434404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.444193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.444329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.444366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.444385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.444402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.444444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 
00:44:25.372 [2024-07-22 23:25:01.454194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.454330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.454366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.454386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.454403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.454444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.464259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.464409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.464444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.464463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.464480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.464521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.474334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.474461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.474497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.474517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.474534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.474574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 
00:44:25.372 [2024-07-22 23:25:01.484328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.484469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.484510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.484531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.484548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.484589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.494290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.494419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.494456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.494476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.494492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.494532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.504356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.504479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.504515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.504535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.504552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.504598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 
00:44:25.372 [2024-07-22 23:25:01.514449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.514599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.514635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.514655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.514672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.514712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.524468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.524594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.524627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.524646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.524663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.524712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.372 [2024-07-22 23:25:01.534470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.534599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.534636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.534656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.534673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.534715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 
00:44:25.372 [2024-07-22 23:25:01.544500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.372 [2024-07-22 23:25:01.544621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.372 [2024-07-22 23:25:01.544658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.372 [2024-07-22 23:25:01.544678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.372 [2024-07-22 23:25:01.544695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.372 [2024-07-22 23:25:01.544735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.372 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.554492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.554621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.554657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.554676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.554694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.554735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.564531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.564651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.564685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.564705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.564722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.564762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 
00:44:25.373 [2024-07-22 23:25:01.574586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.574724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.574765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.574786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.574804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.574845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.584584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.584709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.584745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.584766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.584782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.584823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.594619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.594775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.594811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.594831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.594848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.594888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 
00:44:25.373 [2024-07-22 23:25:01.604666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.604796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.604830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.604850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.604867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.604909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.614689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.614810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.614847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.614868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.614892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.614934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.624745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.624927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.624961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.624980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.624998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.625039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 
00:44:25.373 [2024-07-22 23:25:01.634816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.634935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.634971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.634991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.635007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.635048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.644836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.645000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.645034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.645054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.645071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.645112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.654819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.654940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.654976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.655000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.655018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.655059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 
00:44:25.373 [2024-07-22 23:25:01.664906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.665049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.665084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.665104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.665122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.665164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.373 [2024-07-22 23:25:01.674906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.373 [2024-07-22 23:25:01.675038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.373 [2024-07-22 23:25:01.675072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.373 [2024-07-22 23:25:01.675092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.373 [2024-07-22 23:25:01.675109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.373 [2024-07-22 23:25:01.675150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.373 qpair failed and we were unable to recover it. 00:44:25.635 [2024-07-22 23:25:01.684929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.635 [2024-07-22 23:25:01.685057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.635 [2024-07-22 23:25:01.685092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.635 [2024-07-22 23:25:01.685113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.635 [2024-07-22 23:25:01.685130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.635 [2024-07-22 23:25:01.685171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.635 qpair failed and we were unable to recover it. 
00:44:25.635 [2024-07-22 23:25:01.694957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.635 [2024-07-22 23:25:01.695123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.635 [2024-07-22 23:25:01.695157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.635 [2024-07-22 23:25:01.695177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.635 [2024-07-22 23:25:01.695195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.635 [2024-07-22 23:25:01.695236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.635 qpair failed and we were unable to recover it. 00:44:25.635 [2024-07-22 23:25:01.704961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.635 [2024-07-22 23:25:01.705106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.635 [2024-07-22 23:25:01.705141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.635 [2024-07-22 23:25:01.705161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.635 [2024-07-22 23:25:01.705185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.635 [2024-07-22 23:25:01.705228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.635 qpair failed and we were unable to recover it. 00:44:25.635 [2024-07-22 23:25:01.714999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.635 [2024-07-22 23:25:01.715114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.635 [2024-07-22 23:25:01.715149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.635 [2024-07-22 23:25:01.715169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.635 [2024-07-22 23:25:01.715186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.635 [2024-07-22 23:25:01.715227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.635 qpair failed and we were unable to recover it. 
00:44:25.635 [2024-07-22 23:25:01.725031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.635 [2024-07-22 23:25:01.725154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.635 [2024-07-22 23:25:01.725187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.635 [2024-07-22 23:25:01.725208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.635 [2024-07-22 23:25:01.725224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.635 [2024-07-22 23:25:01.725265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.635 qpair failed and we were unable to recover it. 00:44:25.635 [2024-07-22 23:25:01.735066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.635 [2024-07-22 23:25:01.735218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.635 [2024-07-22 23:25:01.735253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.735272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.735289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.735344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.745139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.745254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.745288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.745317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.745337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.745379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 
00:44:25.636 [2024-07-22 23:25:01.755175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.755294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.755338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.755359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.755378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.755419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.765177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.765334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.765369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.765389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.765406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.765447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.775199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.775328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.775364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.775384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.775401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.775442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 
00:44:25.636 [2024-07-22 23:25:01.785225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.785374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.785410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.785430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.785448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.785491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.795236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.795370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.795405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.795432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.795450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.795491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.805295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.805434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.805468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.805488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.805505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.805545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 
00:44:25.636 [2024-07-22 23:25:01.815332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.815463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.815499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.815519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.815536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.815577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.825383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.825540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.825577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.825597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.825615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.825656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.835385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.835509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.835542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.835562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.835578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.835619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 
00:44:25.636 [2024-07-22 23:25:01.845421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.845543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.845578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.845598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.845614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.845655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.855416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.855562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.855597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.636 [2024-07-22 23:25:01.855616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.636 [2024-07-22 23:25:01.855633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.636 [2024-07-22 23:25:01.855674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.636 qpair failed and we were unable to recover it. 00:44:25.636 [2024-07-22 23:25:01.865477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.636 [2024-07-22 23:25:01.865617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.636 [2024-07-22 23:25:01.865651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.865670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.865687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.865729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 
00:44:25.637 [2024-07-22 23:25:01.875517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.875658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.875691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.875710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.875728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.875768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 00:44:25.637 [2024-07-22 23:25:01.885570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.885736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.885787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.885809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.885827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.885868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 00:44:25.637 [2024-07-22 23:25:01.895549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.895665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.895699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.895719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.895736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.895776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 
00:44:25.637 [2024-07-22 23:25:01.905657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.905775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.905809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.905829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.905846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.905887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 00:44:25.637 [2024-07-22 23:25:01.915615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.915737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.915772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.915791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.915808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.915849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 00:44:25.637 [2024-07-22 23:25:01.925646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.925793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.925827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.925846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.925863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.925911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 
00:44:25.637 [2024-07-22 23:25:01.935676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.637 [2024-07-22 23:25:01.935824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.637 [2024-07-22 23:25:01.935858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.637 [2024-07-22 23:25:01.935877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.637 [2024-07-22 23:25:01.935894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.637 [2024-07-22 23:25:01.935935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.637 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:01.945701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:01.945826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:01.945865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:01.945889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:01.945909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:01.945950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:01.955739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:01.955860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:01.955894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:01.955913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:01.955931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:01.955971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 
00:44:25.899 [2024-07-22 23:25:01.965865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:01.965993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:01.966027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:01.966046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:01.966063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:01.966104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:01.975863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:01.975991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:01.976032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:01.976053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:01.976070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:01.976110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:01.985872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:01.985993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:01.986028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:01.986047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:01.986064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:01.986105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 
00:44:25.899 [2024-07-22 23:25:01.995868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:01.995991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:01.996025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:01.996044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:01.996061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:01.996102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:02.005921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:02.006044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:02.006078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:02.006097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:02.006114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:02.006154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:02.015939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:02.016075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:02.016109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:02.016128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:02.016151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:02.016192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 
00:44:25.899 [2024-07-22 23:25:02.026007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:02.026123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:02.026157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:02.026177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:02.026193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:02.026233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:02.036048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:02.036180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:02.036216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:02.036235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.899 [2024-07-22 23:25:02.036252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.899 [2024-07-22 23:25:02.036292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.899 qpair failed and we were unable to recover it. 00:44:25.899 [2024-07-22 23:25:02.046081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.899 [2024-07-22 23:25:02.046207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.899 [2024-07-22 23:25:02.046241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.899 [2024-07-22 23:25:02.046260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.046277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.046327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 
00:44:25.900 [2024-07-22 23:25:02.056063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.056214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.056248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.056267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.056283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.056335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.066078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.066222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.066257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.066277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.066294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.066346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.076123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.076265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.076299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.076332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.076351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.076392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 
00:44:25.900 [2024-07-22 23:25:02.086201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.086360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.086394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.086413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.086431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.086472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.096356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.096480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.096514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.096533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.096550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.096590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.106210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.106342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.106377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.106397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.106425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.106468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 
00:44:25.900 [2024-07-22 23:25:02.116252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.116384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.116418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.116438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.116455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.116496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.126272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.126405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.126439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.126459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.126476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.126517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.136297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.136440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.136475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.136495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.136512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.136553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 
00:44:25.900 [2024-07-22 23:25:02.146318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.146433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.146467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.146487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.146504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.146544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.156338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.156459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.156494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.156512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.156529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.156570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 00:44:25.900 [2024-07-22 23:25:02.166404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.166529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.166563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.166582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.166598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.900 [2024-07-22 23:25:02.166638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.900 qpair failed and we were unable to recover it. 
00:44:25.900 [2024-07-22 23:25:02.176402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.900 [2024-07-22 23:25:02.176538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.900 [2024-07-22 23:25:02.176572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.900 [2024-07-22 23:25:02.176591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.900 [2024-07-22 23:25:02.176608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.901 [2024-07-22 23:25:02.176650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.901 qpair failed and we were unable to recover it. 00:44:25.901 [2024-07-22 23:25:02.186447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.901 [2024-07-22 23:25:02.186572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.901 [2024-07-22 23:25:02.186606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.901 [2024-07-22 23:25:02.186625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.901 [2024-07-22 23:25:02.186642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.901 [2024-07-22 23:25:02.186683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.901 qpair failed and we were unable to recover it. 00:44:25.901 [2024-07-22 23:25:02.196460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.901 [2024-07-22 23:25:02.196588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.901 [2024-07-22 23:25:02.196621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.901 [2024-07-22 23:25:02.196648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.901 [2024-07-22 23:25:02.196666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.901 [2024-07-22 23:25:02.196707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.901 qpair failed and we were unable to recover it. 
00:44:25.901 [2024-07-22 23:25:02.206475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:25.901 [2024-07-22 23:25:02.206604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:25.901 [2024-07-22 23:25:02.206640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:25.901 [2024-07-22 23:25:02.206663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:25.901 [2024-07-22 23:25:02.206680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:25.901 [2024-07-22 23:25:02.206722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:25.901 qpair failed and we were unable to recover it. 00:44:26.162 [2024-07-22 23:25:02.216498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.162 [2024-07-22 23:25:02.216631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.162 [2024-07-22 23:25:02.216666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.162 [2024-07-22 23:25:02.216687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.162 [2024-07-22 23:25:02.216704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.162 [2024-07-22 23:25:02.216745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.162 qpair failed and we were unable to recover it. 00:44:26.162 [2024-07-22 23:25:02.226555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.162 [2024-07-22 23:25:02.226669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.162 [2024-07-22 23:25:02.226704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.162 [2024-07-22 23:25:02.226724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.162 [2024-07-22 23:25:02.226740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.162 [2024-07-22 23:25:02.226781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.162 qpair failed and we were unable to recover it. 
00:44:26.162 [2024-07-22 23:25:02.236576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.162 [2024-07-22 23:25:02.236687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.162 [2024-07-22 23:25:02.236721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.162 [2024-07-22 23:25:02.236740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.162 [2024-07-22 23:25:02.236756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.162 [2024-07-22 23:25:02.236796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.162 qpair failed and we were unable to recover it. 00:44:26.162 [2024-07-22 23:25:02.246660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.246809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.246844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.246863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.246880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.246921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.256653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.256776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.256810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.256829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.256846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.256887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 
00:44:26.163 [2024-07-22 23:25:02.266692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.266812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.266846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.266866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.266883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.266923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.276713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.276851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.276885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.276904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.276921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.276962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.286747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.286875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.286916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.286938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.286955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.286996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 
00:44:26.163 [2024-07-22 23:25:02.296764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.296884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.296918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.296937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.296954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.296994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.306804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.306952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.306986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.307005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.307022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.307063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.316786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.316911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.316944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.316964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.316981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.317021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 
00:44:26.163 [2024-07-22 23:25:02.326860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.326989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.327023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.327042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.327059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.327106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.336845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.336967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.337001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.337020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.337038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.337078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.346934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.347068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.347102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.347122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.347139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.347180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 
00:44:26.163 [2024-07-22 23:25:02.357002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.357125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.357160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.357179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.357197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.357237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.367008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.367175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.367211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.163 [2024-07-22 23:25:02.367230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.163 [2024-07-22 23:25:02.367247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.163 [2024-07-22 23:25:02.367289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.163 qpair failed and we were unable to recover it. 00:44:26.163 [2024-07-22 23:25:02.377017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.163 [2024-07-22 23:25:02.377140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.163 [2024-07-22 23:25:02.377182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.377203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.377220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.377261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 
00:44:26.164 [2024-07-22 23:25:02.387006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.387124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.387159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.387178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.387195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.387235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 00:44:26.164 [2024-07-22 23:25:02.397061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.397193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.397228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.397247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.397264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.397304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 00:44:26.164 [2024-07-22 23:25:02.407125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.407246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.407280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.407299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.407327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.407369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 
00:44:26.164 [2024-07-22 23:25:02.417112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.417234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.417269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.417288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.417305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.417365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 00:44:26.164 [2024-07-22 23:25:02.427203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.427337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.427371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.427390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.427408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.427449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 00:44:26.164 [2024-07-22 23:25:02.437191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.437316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.437351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.437371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.437388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.437429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 
00:44:26.164 [2024-07-22 23:25:02.447227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.447375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.447410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.447429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.447446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.447486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 00:44:26.164 [2024-07-22 23:25:02.457262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.457390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.457425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.457445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.457461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.457502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 00:44:26.164 [2024-07-22 23:25:02.467327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.164 [2024-07-22 23:25:02.467489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.164 [2024-07-22 23:25:02.467527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.164 [2024-07-22 23:25:02.467547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.164 [2024-07-22 23:25:02.467565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.164 [2024-07-22 23:25:02.467605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.164 qpair failed and we were unable to recover it. 
00:44:26.425 [2024-07-22 23:25:02.477304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.425 [2024-07-22 23:25:02.477442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.477477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.477497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.477514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.477562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.487375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.487498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.487532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.487551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.487569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.487611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.497396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.497515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.497549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.497569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.497586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.497626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 
00:44:26.426 [2024-07-22 23:25:02.507383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.507506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.507541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.507560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.507585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.507626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.517429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.517548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.517582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.517601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.517619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.517659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.527521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.527648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.527683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.527702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.527720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.527761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 
00:44:26.426 [2024-07-22 23:25:02.537530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.537677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.537713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.537733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.537750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.537791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.547536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.547685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.547722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.547743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.547761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.547801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.557541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.557661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.557695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.557714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.557731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.557772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 
00:44:26.426 [2024-07-22 23:25:02.567620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.567747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.567781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.567800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.567817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.567857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.577623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.577745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.577779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.577798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.577814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.577857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 00:44:26.426 [2024-07-22 23:25:02.587631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.426 [2024-07-22 23:25:02.587780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.426 [2024-07-22 23:25:02.587814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.426 [2024-07-22 23:25:02.587833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.426 [2024-07-22 23:25:02.587850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.426 [2024-07-22 23:25:02.587893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.426 qpair failed and we were unable to recover it. 
00:44:26.426 [2024-07-22 23:25:02.597629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.597742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.597775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.597802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.597820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.597861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.607720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.607874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.607908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.607927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.607944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.607986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.617709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.617827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.617860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.617879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.617896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.617936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 
00:44:26.427 [2024-07-22 23:25:02.627879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.628003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.628036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.628055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.628072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.628114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.637764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.637887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.637921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.637940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.637957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.637997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.647829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.647956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.647989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.648008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.648025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.648066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 
00:44:26.427 [2024-07-22 23:25:02.657817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.657936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.657969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.657988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.658004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.658045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.667893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.668016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.668051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.668070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.668087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.668127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.677896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.678017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.678051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.678070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.678087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.678127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 
00:44:26.427 [2024-07-22 23:25:02.687999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.688124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.688158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.688185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.688203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.688243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.697985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.698110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.698144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.698163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.698179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.698221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.707987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.708107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.708140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.708160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.708177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.708216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 
00:44:26.427 [2024-07-22 23:25:02.717985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.718100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.718135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.427 [2024-07-22 23:25:02.718155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.427 [2024-07-22 23:25:02.718172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.427 [2024-07-22 23:25:02.718212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.427 qpair failed and we were unable to recover it. 00:44:26.427 [2024-07-22 23:25:02.728044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.427 [2024-07-22 23:25:02.728174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.427 [2024-07-22 23:25:02.728207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.428 [2024-07-22 23:25:02.728226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.428 [2024-07-22 23:25:02.728243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.428 [2024-07-22 23:25:02.728284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.428 qpair failed and we were unable to recover it. 00:44:26.689 [2024-07-22 23:25:02.738042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.738162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.689 [2024-07-22 23:25:02.738197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.689 [2024-07-22 23:25:02.738216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.689 [2024-07-22 23:25:02.738234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.689 [2024-07-22 23:25:02.738276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.689 qpair failed and we were unable to recover it. 
00:44:26.689 [2024-07-22 23:25:02.748081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.748203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.689 [2024-07-22 23:25:02.748236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.689 [2024-07-22 23:25:02.748255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.689 [2024-07-22 23:25:02.748272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.689 [2024-07-22 23:25:02.748323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.689 qpair failed and we were unable to recover it. 00:44:26.689 [2024-07-22 23:25:02.758111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.758231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.689 [2024-07-22 23:25:02.758268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.689 [2024-07-22 23:25:02.758288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.689 [2024-07-22 23:25:02.758304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.689 [2024-07-22 23:25:02.758355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.689 qpair failed and we were unable to recover it. 00:44:26.689 [2024-07-22 23:25:02.768225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.768364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.689 [2024-07-22 23:25:02.768398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.689 [2024-07-22 23:25:02.768416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.689 [2024-07-22 23:25:02.768433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.689 [2024-07-22 23:25:02.768474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.689 qpair failed and we were unable to recover it. 
00:44:26.689 [2024-07-22 23:25:02.778201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.778324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.689 [2024-07-22 23:25:02.778367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.689 [2024-07-22 23:25:02.778389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.689 [2024-07-22 23:25:02.778406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.689 [2024-07-22 23:25:02.778446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.689 qpair failed and we were unable to recover it. 00:44:26.689 [2024-07-22 23:25:02.788240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.788404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.689 [2024-07-22 23:25:02.788439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.689 [2024-07-22 23:25:02.788458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.689 [2024-07-22 23:25:02.788474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.689 [2024-07-22 23:25:02.788517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.689 qpair failed and we were unable to recover it. 00:44:26.689 [2024-07-22 23:25:02.798238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.689 [2024-07-22 23:25:02.798367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.798405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.798426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.798442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.798484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 
00:44:26.690 [2024-07-22 23:25:02.808321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.808458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.808494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.808514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.808531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.808571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.818284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.818427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.818463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.818482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.818499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.818547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.828397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.828552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.828588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.828608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.828624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.828665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 
00:44:26.690 [2024-07-22 23:25:02.838351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.838473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.838508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.838528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.838544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.838584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.848418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.848545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.848581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.848600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.848617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.848657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.858428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.858548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.858581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.858600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.858617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.858657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 
00:44:26.690 [2024-07-22 23:25:02.868449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.868571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.868616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.868637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.868653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.868694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.878477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.878638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.878671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.878691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.878708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.878748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.888602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.888726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.888759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.888779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.888796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.888837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 
00:44:26.690 [2024-07-22 23:25:02.898601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.898772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.898806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.898825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.898842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.898882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.908597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.908742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.908775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.908795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.908818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.908860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 00:44:26.690 [2024-07-22 23:25:02.918636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.690 [2024-07-22 23:25:02.918766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.690 [2024-07-22 23:25:02.918800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.690 [2024-07-22 23:25:02.918820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.690 [2024-07-22 23:25:02.918837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.690 [2024-07-22 23:25:02.918877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.690 qpair failed and we were unable to recover it. 
00:44:26.691 [2024-07-22 23:25:02.928645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.928767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.928800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.928819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.928835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.928875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 00:44:26.691 [2024-07-22 23:25:02.938692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.938811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.938844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.938865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.938882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.938922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 00:44:26.691 [2024-07-22 23:25:02.948736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.948849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.948883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.948902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.948919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.948959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 
00:44:26.691 [2024-07-22 23:25:02.958768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.958893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.958926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.958945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.958961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.959002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 00:44:26.691 [2024-07-22 23:25:02.968758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.968886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.968919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.968938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.968955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.968995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 00:44:26.691 [2024-07-22 23:25:02.978762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.978881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.978914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.978934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.978951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.978991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 
00:44:26.691 [2024-07-22 23:25:02.988990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.989129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.989163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.989181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.989198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.691 [2024-07-22 23:25:02.989239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.691 qpair failed and we were unable to recover it. 00:44:26.691 [2024-07-22 23:25:02.998858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.691 [2024-07-22 23:25:02.998977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.691 [2024-07-22 23:25:02.999011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.691 [2024-07-22 23:25:02.999038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.691 [2024-07-22 23:25:02.999056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:02.999096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.008918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.009045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.009079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.009099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.009115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.009157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 
00:44:26.957 [2024-07-22 23:25:03.018909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.019029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.019062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.019081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.019098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.019139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.028941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.029066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.029100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.029119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.029136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.029177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.038955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.039075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.039110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.039130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.039147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.039188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 
00:44:26.957 [2024-07-22 23:25:03.049015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.049140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.049174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.049192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.049209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.049249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.059012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.059159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.059192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.059211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.059228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.059268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.069082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.069206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.069239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.069258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.069274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.069324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 
00:44:26.957 [2024-07-22 23:25:03.079087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.079233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.079266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.079285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.079301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.079355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.089151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.089291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.089334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.089371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.089389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.089431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.099108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.099230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.099266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.099285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.099302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.099356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 
00:44:26.957 [2024-07-22 23:25:03.109158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.109278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.109323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.109346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.109363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.109404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.119206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.119336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.119371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.119390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.119406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.119447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 00:44:26.957 [2024-07-22 23:25:03.129264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.129399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.129434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.129453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.129470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.957 [2024-07-22 23:25:03.129510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.957 qpair failed and we were unable to recover it. 
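On the target side, _nvmf_ctrlr_add_io_qpair() no longer finds a controller with ID 0x1, so it completes the CONNECT with a command-specific status: the "sct 1, sc 130" in each line is status code type 1 (command specific) and status code 0x82, which appears to be the Fabrics "Connect Invalid Parameters" value. The host reads those two fields straight out of the completion; the callback below is a hypothetical sketch of that (connect_done() is not the test's code, but struct spdk_nvme_cpl and spdk_nvme_cpl_is_error() are the SPDK definitions).

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical CONNECT completion callback: shows where the "sct 1, sc 130"
 * values in the log come from (the status fields of the completion entry). */
static void
connect_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct 1 = command specific; sc 0x82 (130) reported by the target */
		fprintf(stderr, "CONNECT failed: sct %u, sc 0x%02x\n",
			cpl->status.sct, cpl->status.sc);
	}
}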
00:44:26.957 [2024-07-22 23:25:03.139257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.957 [2024-07-22 23:25:03.139438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.957 [2024-07-22 23:25:03.139473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.957 [2024-07-22 23:25:03.139493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.957 [2024-07-22 23:25:03.139510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.139550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.149284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.149430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.149464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.149482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.149499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.149540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.159336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.159454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.159488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.159507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.159524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.159566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 
00:44:26.958 [2024-07-22 23:25:03.169398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.169569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.169603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.169622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.169639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.169680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.179407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.179524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.179563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.179585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.179602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.179642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.189402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.189522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.189556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.189575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.189593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.189635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 
00:44:26.958 [2024-07-22 23:25:03.199435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.199551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.199585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.199605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.199622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.199662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.209463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.209623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.209657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.209677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.209694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.209735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.219510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.219658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.219692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.219711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.219729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.219776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 
00:44:26.958 [2024-07-22 23:25:03.229539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.229662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.229696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.229715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.229733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.229773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.239622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.239779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.239812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.239832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.239849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.239889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:26.958 [2024-07-22 23:25:03.249636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.249817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.249850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.249870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.249889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.249930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 
00:44:26.958 [2024-07-22 23:25:03.259608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:26.958 [2024-07-22 23:25:03.259728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:26.958 [2024-07-22 23:25:03.259762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:26.958 [2024-07-22 23:25:03.259782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:26.958 [2024-07-22 23:25:03.259799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:26.958 [2024-07-22 23:25:03.259839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:26.958 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.269742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.269870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.269911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.269931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.269948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.269989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.279682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.279807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.279841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.279860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.279877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.279918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.289739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.289928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.289962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.289981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.289998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.290040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.299743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.299870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.299904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.299923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.299940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.299980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.309779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.309896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.309929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.309948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.309972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.310014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.319868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.320025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.320060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.320078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.320098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.320140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.329851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.329979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.330013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.330033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.330050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.330090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.339850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.339968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.340002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.340021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.340038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.340078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.349886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.350004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.350037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.350057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.350074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.350114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.359934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.360075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.360108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.360127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.360144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.360184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.369957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.370101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.370136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.370155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.370171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.370212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.380012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.380179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.380213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.380232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.380250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.380291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.390015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.390141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.390175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.390193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.390210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.390252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.400063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.400206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.400240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.400260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.400284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.400337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.410051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.410175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.410208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.410227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.410244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.410283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.420090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.420226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.420261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.420280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.420297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.420349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.430102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.430224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.430258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.430278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.430295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.430343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.440110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.440287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.440334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.440355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.440373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.440414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.450182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.450318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.450353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.450372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.450389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.450432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.460245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.460376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.460411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.460429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.460446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.460487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.470281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.470418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.470452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.470471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.470488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.470529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.480272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.480404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.480439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.480458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.480475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.480516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 00:44:27.218 [2024-07-22 23:25:03.490378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.490522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.218 [2024-07-22 23:25:03.490555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.218 [2024-07-22 23:25:03.490581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.218 [2024-07-22 23:25:03.490599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.218 [2024-07-22 23:25:03.490641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.218 qpair failed and we were unable to recover it. 
00:44:27.218 [2024-07-22 23:25:03.500327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.218 [2024-07-22 23:25:03.500458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.219 [2024-07-22 23:25:03.500492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.219 [2024-07-22 23:25:03.500511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.219 [2024-07-22 23:25:03.500528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.219 [2024-07-22 23:25:03.500570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.219 qpair failed and we were unable to recover it. 00:44:27.219 [2024-07-22 23:25:03.510369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.219 [2024-07-22 23:25:03.510559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.219 [2024-07-22 23:25:03.510595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.219 [2024-07-22 23:25:03.510615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.219 [2024-07-22 23:25:03.510632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.219 [2024-07-22 23:25:03.510673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.219 qpair failed and we were unable to recover it. 00:44:27.219 [2024-07-22 23:25:03.520394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.219 [2024-07-22 23:25:03.520528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.219 [2024-07-22 23:25:03.520561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.219 [2024-07-22 23:25:03.520580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.219 [2024-07-22 23:25:03.520598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.219 [2024-07-22 23:25:03.520639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.219 qpair failed and we were unable to recover it. 
00:44:27.479 [2024-07-22 23:25:03.530484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.479 [2024-07-22 23:25:03.530612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.479 [2024-07-22 23:25:03.530647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.479 [2024-07-22 23:25:03.530667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.479 [2024-07-22 23:25:03.530684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.479 [2024-07-22 23:25:03.530725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.479 qpair failed and we were unable to recover it. 00:44:27.479 [2024-07-22 23:25:03.540441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.479 [2024-07-22 23:25:03.540576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.479 [2024-07-22 23:25:03.540611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.479 [2024-07-22 23:25:03.540632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.479 [2024-07-22 23:25:03.540648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.479 [2024-07-22 23:25:03.540689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.479 qpair failed and we were unable to recover it. 00:44:27.479 [2024-07-22 23:25:03.550492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.479 [2024-07-22 23:25:03.550638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.479 [2024-07-22 23:25:03.550672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.550691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.550707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.550748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 
00:44:27.480 [2024-07-22 23:25:03.560567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.560686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.560720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.560740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.560757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.560797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.570554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.570683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.570716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.570736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.570753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.570793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.580575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.580702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.580743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.580763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.580780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.580820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 
00:44:27.480 [2024-07-22 23:25:03.590570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.590690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.590723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.590743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.590759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.590800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.600640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.600762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.600795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.600814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.600831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.600871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.610703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.610830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.610864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.610883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.610900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.610941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 
00:44:27.480 [2024-07-22 23:25:03.620645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.620793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.620828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.620848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.620865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.620917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.630685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.630800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.630833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.630853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.630870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.630910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.640733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.640849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.640883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.640902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.640919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.640958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 
00:44:27.480 [2024-07-22 23:25:03.650806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.650973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.651007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.651026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.651043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.651084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.660790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.660908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.660941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.660960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.660977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.661017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 00:44:27.480 [2024-07-22 23:25:03.670801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.670919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.670959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.670980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.670997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.480 [2024-07-22 23:25:03.671037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.480 qpair failed and we were unable to recover it. 
00:44:27.480 [2024-07-22 23:25:03.680843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.480 [2024-07-22 23:25:03.680957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.480 [2024-07-22 23:25:03.680989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.480 [2024-07-22 23:25:03.681009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.480 [2024-07-22 23:25:03.681026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.681066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.690898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.691026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.691059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.691078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.691095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.691135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.700913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.701060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.701094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.701113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.701130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.701171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 
00:44:27.481 [2024-07-22 23:25:03.710932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.711078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.711112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.711131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.711154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.711197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.720966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.721080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.721114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.721133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.721149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.721190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.731017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.731186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.731219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.731238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.731255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.731296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 
00:44:27.481 [2024-07-22 23:25:03.741011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.741179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.741213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.741233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.741250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.741291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.751016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.751138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.751171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.751191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.751208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.751249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.761096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.761222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.761256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.761275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.761293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.761345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 
00:44:27.481 [2024-07-22 23:25:03.771119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.771239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.771272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.771292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.771316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.771359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.481 [2024-07-22 23:25:03.781125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.481 [2024-07-22 23:25:03.781248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.481 [2024-07-22 23:25:03.781282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.481 [2024-07-22 23:25:03.781301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.481 [2024-07-22 23:25:03.781331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.481 [2024-07-22 23:25:03.781373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.481 qpair failed and we were unable to recover it. 00:44:27.741 [2024-07-22 23:25:03.791196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.741 [2024-07-22 23:25:03.791338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.741 [2024-07-22 23:25:03.791374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.741 [2024-07-22 23:25:03.791394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.741 [2024-07-22 23:25:03.791411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.741 [2024-07-22 23:25:03.791452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.741 qpair failed and we were unable to recover it. 
00:44:27.741 [2024-07-22 23:25:03.801271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.741 [2024-07-22 23:25:03.801420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.741 [2024-07-22 23:25:03.801455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.741 [2024-07-22 23:25:03.801475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.741 [2024-07-22 23:25:03.801499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.741 [2024-07-22 23:25:03.801542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.741 qpair failed and we were unable to recover it. 00:44:27.741 [2024-07-22 23:25:03.811296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.741 [2024-07-22 23:25:03.811454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.741 [2024-07-22 23:25:03.811489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.741 [2024-07-22 23:25:03.811509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.741 [2024-07-22 23:25:03.811526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.741 [2024-07-22 23:25:03.811566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.741 qpair failed and we were unable to recover it. 00:44:27.741 [2024-07-22 23:25:03.821303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.741 [2024-07-22 23:25:03.821464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.741 [2024-07-22 23:25:03.821499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.741 [2024-07-22 23:25:03.821517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.741 [2024-07-22 23:25:03.821534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.741 [2024-07-22 23:25:03.821576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.741 qpair failed and we were unable to recover it. 
00:44:27.741 [2024-07-22 23:25:03.831294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.741 [2024-07-22 23:25:03.831442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.741 [2024-07-22 23:25:03.831477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.741 [2024-07-22 23:25:03.831495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.741 [2024-07-22 23:25:03.831512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.741 [2024-07-22 23:25:03.831553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.741 qpair failed and we were unable to recover it. 00:44:27.741 [2024-07-22 23:25:03.841344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.741 [2024-07-22 23:25:03.841462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.741 [2024-07-22 23:25:03.841496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.741 [2024-07-22 23:25:03.841515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.741 [2024-07-22 23:25:03.841532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.841572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.851410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.851532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.851566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.851587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.851604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.851645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 
00:44:27.742 [2024-07-22 23:25:03.861423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.861548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.861582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.861601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.861618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.861659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.871396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.871548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.871581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.871601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.871618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.871659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.881494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.881639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.881674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.881692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.881710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.881750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 
00:44:27.742 [2024-07-22 23:25:03.891482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.891647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.891681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.891708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.891727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.891769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.901500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.901625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.901659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.901679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.901696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.901737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.911513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.911631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.911664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.911683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.911700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.911741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 
00:44:27.742 [2024-07-22 23:25:03.921639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.921761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.921794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.921814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.921830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.921870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.931649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.931814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.931847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.931866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.931882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.931923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.941618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.941761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.941795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.941814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.941830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.941870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 
00:44:27.742 [2024-07-22 23:25:03.951681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.951829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.951862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.951880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.951897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.951938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.961683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.961798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.961832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.961852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.961869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.961908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.742 qpair failed and we were unable to recover it. 00:44:27.742 [2024-07-22 23:25:03.971737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.742 [2024-07-22 23:25:03.971862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.742 [2024-07-22 23:25:03.971896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.742 [2024-07-22 23:25:03.971916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.742 [2024-07-22 23:25:03.971932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.742 [2024-07-22 23:25:03.971972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 
00:44:27.743 [2024-07-22 23:25:03.981788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:03.981929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:03.981969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:03.981989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:03.982007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:03.982047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 00:44:27.743 [2024-07-22 23:25:03.991741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:03.991869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:03.991902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:03.991922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:03.991939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:03.991988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 00:44:27.743 [2024-07-22 23:25:04.001777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:04.001931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:04.001965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:04.001985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:04.002001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:04.002042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 
00:44:27.743 [2024-07-22 23:25:04.011808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:04.011933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:04.011966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:04.011985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:04.012003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:04.012043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 00:44:27.743 [2024-07-22 23:25:04.021835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:04.021958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:04.021994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:04.022014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:04.022032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:04.022080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 00:44:27.743 [2024-07-22 23:25:04.031868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:04.031978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:04.032011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:04.032029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:04.032046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:04.032098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 
00:44:27.743 [2024-07-22 23:25:04.041894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:27.743 [2024-07-22 23:25:04.042049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:27.743 [2024-07-22 23:25:04.042087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:27.743 [2024-07-22 23:25:04.042108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:27.743 [2024-07-22 23:25:04.042126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:27.743 [2024-07-22 23:25:04.042166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:27.743 qpair failed and we were unable to recover it. 00:44:28.002 [2024-07-22 23:25:04.051997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.002 [2024-07-22 23:25:04.052139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.002 [2024-07-22 23:25:04.052180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.002 [2024-07-22 23:25:04.052206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.002 [2024-07-22 23:25:04.052225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:28.002 [2024-07-22 23:25:04.052271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:28.002 qpair failed and we were unable to recover it. 00:44:28.002 [2024-07-22 23:25:04.061959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.002 [2024-07-22 23:25:04.062113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.002 [2024-07-22 23:25:04.062148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.002 [2024-07-22 23:25:04.062168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.002 [2024-07-22 23:25:04.062185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8748000b90 00:44:28.002 [2024-07-22 23:25:04.062227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:28.002 qpair failed and we were unable to recover it. 
00:44:28.002 [2024-07-22 23:25:04.072387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.002 [2024-07-22 23:25:04.072514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.002 [2024-07-22 23:25:04.072599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.002 [2024-07-22 23:25:04.072643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.002 [2024-07-22 23:25:04.072675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8740000b90 00:44:28.002 [2024-07-22 23:25:04.072752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:28.002 qpair failed and we were unable to recover it. 00:44:28.002 [2024-07-22 23:25:04.082448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.002 [2024-07-22 23:25:04.082690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.002 [2024-07-22 23:25:04.082760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.002 [2024-07-22 23:25:04.082798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.002 [2024-07-22 23:25:04.082828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8740000b90 00:44:28.002 [2024-07-22 23:25:04.082903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:28.002 qpair failed and we were unable to recover it. 00:44:28.002 [2024-07-22 23:25:04.092454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.002 [2024-07-22 23:25:04.092722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.002 [2024-07-22 23:25:04.092802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.003 [2024-07-22 23:25:04.092841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.003 [2024-07-22 23:25:04.092872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8738000b90 00:44:28.003 [2024-07-22 23:25:04.092951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:44:28.003 qpair failed and we were unable to recover it. 
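Every retry above carries the same signature: the target-side ctrlr.c rejects the I/O-queue Connect because controller ID 0x1 is no longer known, and the initiator sees the Connect complete with sct 1 / sc 130 (0x82), which the NVMe-oF spec lists as "Connect Invalid Parameters". When triaging a wall of these, it helps to group the retries by qpair id and by tqpair pointer; a small grep/awk sketch against a saved copy of this console output (the file name build.log is only a placeholder):

  #!/usr/bin/env bash
  # Summarize the CONNECT retry storm captured above: transport errors per
  # qpair id, and the distinct tqpair pointers that failed to connect.
  log=build.log    # placeholder path for the saved console log

  echo '== CQ transport errors per qpair id =='
  grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' "$log" \
    | awk '{print "qpair id", $NF}' | sort | uniq -c

  echo '== failed tqpair pointers =='
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' "$log" | sort | uniq -c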
00:44:28.003 [2024-07-22 23:25:04.102510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.003 [2024-07-22 23:25:04.102764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.003 [2024-07-22 23:25:04.102837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.003 [2024-07-22 23:25:04.102874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.003 [2024-07-22 23:25:04.102907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8738000b90 00:44:28.003 [2024-07-22 23:25:04.102984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:44:28.003 qpair failed and we were unable to recover it. 00:44:28.003 [2024-07-22 23:25:04.103238] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:44:28.003 A controller has encountered a failure and is being reset. 00:44:28.003 [2024-07-22 23:25:04.112449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.003 [2024-07-22 23:25:04.112797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.003 [2024-07-22 23:25:04.112877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.003 [2024-07-22 23:25:04.112931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.003 [2024-07-22 23:25:04.112964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x79bbb0 00:44:28.003 [2024-07-22 23:25:04.113038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:44:28.003 qpair failed and we were unable to recover it. 00:44:28.003 [2024-07-22 23:25:04.122435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:28.003 [2024-07-22 23:25:04.122551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:28.003 [2024-07-22 23:25:04.122586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:28.003 [2024-07-22 23:25:04.122605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:28.003 [2024-07-22 23:25:04.122622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x79bbb0 00:44:28.003 [2024-07-22 23:25:04.122691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:44:28.003 qpair failed and we were unable to recover it. 00:44:28.003 [2024-07-22 23:25:04.122885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9b70 (9): Bad file descriptor 00:44:28.003 Controller properly reset. 
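The "Controller properly reset." line marks the initiator's reset path finally succeeding once the target becomes reachable again; everything before it is the expected fallout of tc2 pulling connectivity out from under live qpairs. For reference, a comparable reconnect storm can be provoked by hand by removing and re-adding the subsystem's TCP listener while an initiator workload keeps polling. This is only a sketch, under the assumption that the target was configured through scripts/rpc.py and exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 as in the log; the actual mechanism inside host/target_disconnect.sh may differ:

  #!/usr/bin/env bash
  # Hypothetical reproduction sketch: drop the listener long enough for the
  # initiator's reconnect attempts to fail, then restore it so its controller
  # reset can complete (mirroring the "Controller properly reset." line above).
  RPC=scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 5    # connected initiators now log CONNECT failures like the ones above
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420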
00:44:28.003 Initializing NVMe Controllers 00:44:28.003 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:28.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:28.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:44:28.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:44:28.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:44:28.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:44:28.003 Initialization complete. Launching workers. 00:44:28.003 Starting thread on core 1 00:44:28.003 Starting thread on core 2 00:44:28.003 Starting thread on core 3 00:44:28.003 Starting thread on core 0 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:44:28.003 00:44:28.003 real 0m11.283s 00:44:28.003 user 0m19.893s 00:44:28.003 sys 0m5.741s 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:28.003 ************************************ 00:44:28.003 END TEST nvmf_target_disconnect_tc2 00:44:28.003 ************************************ 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:28.003 rmmod nvme_tcp 00:44:28.003 rmmod nvme_fabrics 00:44:28.003 rmmod nvme_keyring 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1070906 ']' 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1070906 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1070906 ']' 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1070906 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # 
uname 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:28.003 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1070906 00:44:28.262 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:44:28.262 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:44:28.262 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1070906' 00:44:28.262 killing process with pid 1070906 00:44:28.262 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1070906 00:44:28.262 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1070906 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:28.522 23:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:31.058 00:44:31.058 real 0m17.728s 00:44:31.058 user 0m46.745s 00:44:31.058 sys 0m9.066s 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:44:31.058 ************************************ 00:44:31.058 END TEST nvmf_target_disconnect 00:44:31.058 ************************************ 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:44:31.058 00:44:31.058 real 8m21.507s 00:44:31.058 user 20m50.283s 00:44:31.058 sys 2m0.016s 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:31.058 23:25:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:44:31.058 ************************************ 00:44:31.058 END TEST nvmf_host 00:44:31.058 ************************************ 00:44:31.058 23:25:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:44:31.058 00:44:31.058 real 33m31.790s 00:44:31.058 user 89m8.214s 00:44:31.058 sys 8m11.327s 00:44:31.058 23:25:06 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:31.058 23:25:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:31.058 ************************************ 00:44:31.058 END TEST nvmf_tcp 00:44:31.058 ************************************ 00:44:31.058 23:25:06 
-- common/autotest_common.sh@1142 -- # return 0 00:44:31.058 23:25:06 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:44:31.058 23:25:06 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:31.058 23:25:06 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:44:31.058 23:25:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:31.058 23:25:06 -- common/autotest_common.sh@10 -- # set +x 00:44:31.058 ************************************ 00:44:31.058 START TEST spdkcli_nvmf_tcp 00:44:31.058 ************************************ 00:44:31.058 23:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:31.058 * Looking for test storage... 00:44:31.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:31.058 23:25:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:31.058 23:25:07 
spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1072083 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1072083 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1072083 ']' 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:31.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:31.059 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:31.059 [2024-07-22 23:25:07.156190] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:44:31.059 [2024-07-22 23:25:07.156404] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072083 ] 00:44:31.059 EAL: No free 2048 kB hugepages reported on node 1 00:44:31.059 [2024-07-22 23:25:07.271711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:31.318 [2024-07-22 23:25:07.385340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:31.318 [2024-07-22 23:25:07.385354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:31.318 23:25:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:31.318 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:31.318 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:31.318 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:31.318 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:31.318 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:31.318 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:31.318 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:31.318 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:31.318 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:31.318 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:31.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:31.318 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:31.318 ' 00:44:34.606 [2024-07-22 23:25:10.435190] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:35.546 [2024-07-22 23:25:11.704096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:38.085 [2024-07-22 23:25:14.079887] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:39.993 [2024-07-22 23:25:16.114640] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:41.371 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:41.371 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:41.371 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:41.371 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:41.371 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
00:44:41.372 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:41.372 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:41.372 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:41.372 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:41.372 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:41.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:41.372 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:41.630 23:25:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:42.200 23:25:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:42.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:42.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:42.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:42.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:42.200 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:42.200 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:42.200 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:42.200 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:42.200 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:42.200 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:42.200 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:42.200 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:42.200 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:42.200 ' 00:44:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:48.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:48.788 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:48.788 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:48.788 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:48.788 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:48.788 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:48.788 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:48.788 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:48.788 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:48.788 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1072083 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1072083 ']' 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1072083 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:48.788 23:25:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072083 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072083' 00:44:48.788 killing process with pid 1072083 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1072083 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1072083 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1072083 ']' 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1072083 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1072083 ']' 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1072083 00:44:48.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1072083) - No such process 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1072083 is not found' 00:44:48.788 Process with pid 1072083 is not found 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:48.788 00:44:48.788 real 0m17.383s 00:44:48.788 user 0m37.440s 
00:44:48.788 sys 0m1.141s 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:48.788 23:25:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:48.788 ************************************ 00:44:48.788 END TEST spdkcli_nvmf_tcp 00:44:48.788 ************************************ 00:44:48.788 23:25:24 -- common/autotest_common.sh@1142 -- # return 0 00:44:48.788 23:25:24 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:48.788 23:25:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:44:48.788 23:25:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:48.788 23:25:24 -- common/autotest_common.sh@10 -- # set +x 00:44:48.788 ************************************ 00:44:48.788 START TEST nvmf_identify_passthru 00:44:48.788 ************************************ 00:44:48.788 23:25:24 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:48.788 * Looking for test storage... 00:44:48.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:48.788 23:25:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:48.788 23:25:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:48.788 23:25:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
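(Aside, before the identify_passthru run gets fully underway: the spdkcli session that just finished drives ordinary SPDK JSON-RPCs under the hood, so its create pass can be approximated by hand with scripts/rpc.py. The sketch below assumes the default /var/tmp/spdk.sock socket and reuses the names from the commands above; it mirrors, rather than reproduces, what spdkcli_job.py sent.)

  #!/usr/bin/env bash
  # Approximate rpc.py equivalent of the spdkcli "create" pass shown earlier.
  RPC=scripts/rpc.py
  NQN=nqn.2014-08.org.spdk:cnode1

  "$RPC" bdev_malloc_create 32 512 -b Malloc3
  "$RPC" bdev_malloc_create 32 512 -b Malloc4
  "$RPC" nvmf_create_transport -t TCP
  "$RPC" nvmf_create_subsystem "$NQN" -s N37SXV509SRW -a
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc3
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc4
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 127.0.0.1 -s 4260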
00:44:48.788 23:25:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:48.788 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:48.788 23:25:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:48.788 23:25:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:48.788 23:25:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:48.788 23:25:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.788 23:25:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.789 23:25:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.789 23:25:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:48.789 23:25:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.789 23:25:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:48.789 23:25:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:48.789 23:25:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:48.789 23:25:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:44:48.789 23:25:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:51.324 23:25:27 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:51.324 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:51.324 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:51.324 Found net devices under 0000:84:00.0: cvl_0_0 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:51.324 Found net devices under 0000:84:00.1: cvl_0_1 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:51.324 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:51.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:51.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:44:51.585 00:44:51.585 --- 10.0.0.2 ping statistics --- 00:44:51.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.585 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:51.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:51.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:44:51.585 00:44:51.585 --- 10.0.0.1 ping statistics --- 00:44:51.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.585 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:51.585 23:25:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:44:51.585 23:25:27 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:51.585 23:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:51.845 EAL: No free 2048 kB hugepages reported on node 1 00:44:56.037 
23:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:44:56.037 23:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:44:56.037 23:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:56.037 23:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:56.037 EAL: No free 2048 kB hugepages reported on node 1 00:45:00.232 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:45:00.232 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:45:00.232 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:00.232 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:00.491 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:00.491 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1076711 00:45:00.491 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:45:00.491 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:00.491 23:25:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1076711 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1076711 ']' 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:00.491 23:25:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:00.491 [2024-07-22 23:25:36.673032] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:45:00.491 [2024-07-22 23:25:36.673207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:00.491 EAL: No free 2048 kB hugepages reported on node 1 00:45:00.749 [2024-07-22 23:25:36.825459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:00.749 [2024-07-22 23:25:36.977552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:00.749 [2024-07-22 23:25:36.977666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:45:00.749 [2024-07-22 23:25:36.977703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:00.749 [2024-07-22 23:25:36.977741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:00.749 [2024-07-22 23:25:36.977768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:00.749 [2024-07-22 23:25:36.977927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:00.749 [2024-07-22 23:25:36.977992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:45:00.749 [2024-07-22 23:25:36.978078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:45:00.749 [2024-07-22 23:25:36.978084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:45:01.007 23:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.007 INFO: Log level set to 20 00:45:01.007 INFO: Requests: 00:45:01.007 { 00:45:01.007 "jsonrpc": "2.0", 00:45:01.007 "method": "nvmf_set_config", 00:45:01.007 "id": 1, 00:45:01.007 "params": { 00:45:01.007 "admin_cmd_passthru": { 00:45:01.007 "identify_ctrlr": true 00:45:01.007 } 00:45:01.007 } 00:45:01.007 } 00:45:01.007 00:45:01.007 INFO: response: 00:45:01.007 { 00:45:01.007 "jsonrpc": "2.0", 00:45:01.007 "id": 1, 00:45:01.007 "result": true 00:45:01.007 } 00:45:01.007 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.007 23:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.007 INFO: Setting log level to 20 00:45:01.007 INFO: Setting log level to 20 00:45:01.007 INFO: Log level set to 20 00:45:01.007 INFO: Log level set to 20 00:45:01.007 INFO: Requests: 00:45:01.007 { 00:45:01.007 "jsonrpc": "2.0", 00:45:01.007 "method": "framework_start_init", 00:45:01.007 "id": 1 00:45:01.007 } 00:45:01.007 00:45:01.007 INFO: Requests: 00:45:01.007 { 00:45:01.007 "jsonrpc": "2.0", 00:45:01.007 "method": "framework_start_init", 00:45:01.007 "id": 1 00:45:01.007 } 00:45:01.007 00:45:01.007 [2024-07-22 23:25:37.226896] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:01.007 INFO: response: 00:45:01.007 { 00:45:01.007 "jsonrpc": "2.0", 00:45:01.007 "id": 1, 00:45:01.007 "result": true 00:45:01.007 } 00:45:01.007 00:45:01.007 INFO: response: 00:45:01.007 { 00:45:01.007 "jsonrpc": "2.0", 00:45:01.007 "id": 1, 00:45:01.007 "result": true 00:45:01.007 } 00:45:01.007 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.007 23:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.007 23:25:37 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:45:01.007 INFO: Setting log level to 40 00:45:01.007 INFO: Setting log level to 40 00:45:01.007 INFO: Setting log level to 40 00:45:01.007 [2024-07-22 23:25:37.237481] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.007 23:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.007 23:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.007 23:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:04.287 Nvme0n1 00:45:04.287 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.287 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:04.287 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.287 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:04.287 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:04.288 [2024-07-22 23:25:40.158029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:04.288 [ 00:45:04.288 { 00:45:04.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:04.288 "subtype": "Discovery", 00:45:04.288 "listen_addresses": [], 00:45:04.288 "allow_any_host": true, 00:45:04.288 "hosts": [] 00:45:04.288 }, 00:45:04.288 { 00:45:04.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:04.288 "subtype": "NVMe", 00:45:04.288 "listen_addresses": [ 00:45:04.288 { 00:45:04.288 "trtype": "TCP", 00:45:04.288 "adrfam": "IPv4", 00:45:04.288 "traddr": "10.0.0.2", 00:45:04.288 "trsvcid": "4420" 00:45:04.288 } 00:45:04.288 ], 00:45:04.288 "allow_any_host": true, 00:45:04.288 "hosts": [], 00:45:04.288 "serial_number": 
"SPDK00000000000001", 00:45:04.288 "model_number": "SPDK bdev Controller", 00:45:04.288 "max_namespaces": 1, 00:45:04.288 "min_cntlid": 1, 00:45:04.288 "max_cntlid": 65519, 00:45:04.288 "namespaces": [ 00:45:04.288 { 00:45:04.288 "nsid": 1, 00:45:04.288 "bdev_name": "Nvme0n1", 00:45:04.288 "name": "Nvme0n1", 00:45:04.288 "nguid": "27F22F1824944F46920C8BF2EAB7DC6E", 00:45:04.288 "uuid": "27f22f18-2494-4f46-920c-8bf2eab7dc6e" 00:45:04.288 } 00:45:04.288 ] 00:45:04.288 } 00:45:04.288 ] 00:45:04.288 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:04.288 EAL: No free 2048 kB hugepages reported on node 1 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:04.288 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:04.288 EAL: No free 2048 kB hugepages reported on node 1 00:45:04.546 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:45:04.546 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:45:04.546 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:45:04.546 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.546 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:04.546 23:25:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:04.546 rmmod nvme_tcp 00:45:04.546 rmmod nvme_fabrics 00:45:04.546 rmmod nvme_keyring 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:45:04.546 23:25:40 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1076711 ']' 00:45:04.546 23:25:40 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1076711 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1076711 ']' 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1076711 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1076711 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:04.546 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:04.547 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1076711' 00:45:04.547 killing process with pid 1076711 00:45:04.547 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1076711 00:45:04.547 23:25:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1076711 00:45:06.449 23:25:42 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:06.449 23:25:42 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:06.449 23:25:42 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:06.449 23:25:42 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:06.449 23:25:42 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:06.449 23:25:42 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:06.449 23:25:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:06.449 23:25:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:08.352 23:25:44 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:08.352 00:45:08.352 real 0m20.034s 00:45:08.352 user 0m28.566s 00:45:08.352 sys 0m3.723s 00:45:08.352 23:25:44 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:08.352 23:25:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:08.352 ************************************ 00:45:08.352 END TEST nvmf_identify_passthru 00:45:08.352 ************************************ 00:45:08.352 23:25:44 -- common/autotest_common.sh@1142 -- # return 0 00:45:08.352 23:25:44 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:08.353 23:25:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:08.353 23:25:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:08.353 23:25:44 -- common/autotest_common.sh@10 -- # set +x 00:45:08.353 ************************************ 00:45:08.353 START TEST nvmf_dif 00:45:08.353 ************************************ 00:45:08.353 23:25:44 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:08.353 * Looking for test storage... 
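The passthru check that finished above boils down to comparing the identify data read directly over PCIe with the data the NVMe-oF target serves back over TCP once --passthru-identify-ctrlr is enabled. A condensed bash sketch of that comparison, assuming the SPDK tree as the working directory and reusing the BDF, address and NQN from this run (not part of the captured log):

bdf=0000:82:00.0
id_tool=./build/bin/spdk_nvme_identify
# Serial number straight from the local controller over PCIe.
pcie_sn=$("$id_tool" -r "trtype:PCIe traddr:$bdf" | awk '/Serial Number:/ {print $3}')
# Serial number as reported back through the NVMe/TCP subsystem.
tcp_sn=$("$id_tool" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
# The test only passes when the passthrough path reports the same identity.
[ "$pcie_sn" = "$tcp_sn" ] || { echo "identify mismatch: $pcie_sn vs $tcp_sn"; exit 1; }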
00:45:08.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:08.353 23:25:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:08.353 23:25:44 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:08.353 23:25:44 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:08.353 23:25:44 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:08.353 23:25:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.353 23:25:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.353 23:25:44 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.353 23:25:44 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:45:08.353 23:25:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:08.353 23:25:44 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:08.611 23:25:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:45:08.611 23:25:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:45:08.611 23:25:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:08.611 23:25:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:45:08.611 23:25:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:08.611 23:25:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:08.611 23:25:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:08.611 23:25:44 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:45:08.611 23:25:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:11.925 23:25:47 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:45:11.926 Found 0000:84:00.0 (0x8086 - 0x159b) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:45:11.926 Found 0000:84:00.1 (0x8086 - 0x159b) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:45:11.926 Found net devices under 0000:84:00.0: cvl_0_0 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:45:11.926 Found net devices under 0000:84:00.1: cvl_0_1 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:11.926 23:25:47 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:11.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:11.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:45:11.926 00:45:11.926 --- 10.0.0.2 ping statistics --- 00:45:11.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:11.926 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:11.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:11.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:45:11.926 00:45:11.926 --- 10.0.0.1 ping statistics --- 00:45:11.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:11.926 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:45:11.926 23:25:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:13.304 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:45:13.304 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:45:13.304 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:45:13.304 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:45:13.304 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:45:13.304 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:45:13.304 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:45:13.304 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:45:13.304 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:45:13.304 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:45:13.304 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:45:13.304 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:45:13.304 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:45:13.304 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:45:13.304 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:45:13.304 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:45:13.304 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:13.563 23:25:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:45:13.563 23:25:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1080123 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:13.563 23:25:49 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1080123 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1080123 ']' 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:13.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:13.563 23:25:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:13.563 [2024-07-22 23:25:49.860282] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:45:13.563 [2024-07-22 23:25:49.860388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:13.822 EAL: No free 2048 kB hugepages reported on node 1 00:45:13.822 [2024-07-22 23:25:49.967178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:13.822 [2024-07-22 23:25:50.113111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:13.822 [2024-07-22 23:25:50.113230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:13.822 [2024-07-22 23:25:50.113266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:13.822 [2024-07-22 23:25:50.113296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:13.822 [2024-07-22 23:25:50.113354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
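The target launch traced above (nvmfappstart followed by waitforlisten) amounts to starting nvmf_tgt inside the target network namespace and polling its RPC socket until it answers. A minimal sketch of that step, assuming the default /var/tmp/spdk.sock socket and the SPDK tree as the working directory (the actual helpers add more bookkeeping than shown here):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# The UNIX socket lives in the filesystem, so rpc.py can reach it from the root netns.
# rpc_get_methods succeeds once the app is up and listening.
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done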
00:45:13.822 [2024-07-22 23:25:50.113403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:14.081 23:25:50 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:14.081 23:25:50 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:45:14.081 23:25:50 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:14.081 23:25:50 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:14.081 23:25:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 23:25:50 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:14.341 23:25:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:45:14.341 23:25:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:45:14.341 23:25:50 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.341 23:25:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 [2024-07-22 23:25:50.400839] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:14.341 23:25:50 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.341 23:25:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:45:14.341 23:25:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:14.341 23:25:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:14.341 23:25:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 ************************************ 00:45:14.341 START TEST fio_dif_1_default 00:45:14.341 ************************************ 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 bdev_null0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:14.341 [2024-07-22 23:25:50.477655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:14.341 { 00:45:14.341 "params": { 00:45:14.341 "name": "Nvme$subsystem", 00:45:14.341 "trtype": "$TEST_TRANSPORT", 00:45:14.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:14.341 "adrfam": "ipv4", 00:45:14.341 "trsvcid": "$NVMF_PORT", 00:45:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:14.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:14.341 "hdgst": ${hdgst:-false}, 00:45:14.341 "ddgst": ${ddgst:-false} 00:45:14.341 }, 00:45:14.341 "method": "bdev_nvme_attach_controller" 00:45:14.341 } 00:45:14.341 EOF 00:45:14.341 )") 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:14.341 "params": { 00:45:14.341 "name": "Nvme0", 00:45:14.341 "trtype": "tcp", 00:45:14.341 "traddr": "10.0.0.2", 00:45:14.341 "adrfam": "ipv4", 00:45:14.341 "trsvcid": "4420", 00:45:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:14.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:14.341 "hdgst": false, 00:45:14.341 "ddgst": false 00:45:14.341 }, 00:45:14.341 "method": "bdev_nvme_attach_controller" 00:45:14.341 }' 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:14.341 23:25:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:14.600 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:14.600 fio-3.35 00:45:14.600 Starting 1 thread 00:45:14.600 EAL: No free 2048 kB hugepages reported on node 1 00:45:26.818 00:45:26.818 filename0: (groupid=0, jobs=1): err= 0: pid=1080351: Mon Jul 22 23:26:01 2024 00:45:26.818 read: IOPS=183, BW=734KiB/s (752kB/s)(7360KiB/10026msec) 00:45:26.818 slat (usec): min=6, max=107, avg= 9.96, stdev= 3.79 00:45:26.818 clat (usec): min=665, max=45266, avg=21763.00, stdev=20516.80 00:45:26.818 lat (usec): min=674, max=45304, avg=21772.96, stdev=20516.78 00:45:26.818 clat percentiles (usec): 00:45:26.818 | 1.00th=[ 889], 5.00th=[ 1012], 10.00th=[ 1045], 20.00th=[ 1074], 00:45:26.818 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[41157], 60.00th=[41681], 00:45:26.818 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42730], 95.00th=[42730], 00:45:26.818 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:45:26.818 | 99.99th=[45351] 00:45:26.818 bw ( KiB/s): min= 672, max= 768, per=99.99%, avg=734.40, stdev=36.67, samples=20 00:45:26.818 iops : min= 168, max= 192, avg=183.60, stdev= 9.17, samples=20 00:45:26.818 lat 
(usec) : 750=0.65%, 1000=4.02% 00:45:26.818 lat (msec) : 2=44.89%, 50=50.43% 00:45:26.818 cpu : usr=89.28%, sys=10.34%, ctx=17, majf=0, minf=309 00:45:26.818 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:26.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.818 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.818 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:26.818 00:45:26.818 Run status group 0 (all jobs): 00:45:26.818 READ: bw=734KiB/s (752kB/s), 734KiB/s-734KiB/s (752kB/s-752kB/s), io=7360KiB (7537kB), run=10026-10026msec 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.818 00:45:26.818 real 0m11.395s 00:45:26.818 user 0m10.451s 00:45:26.818 sys 0m1.456s 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 ************************************ 00:45:26.818 END TEST fio_dif_1_default 00:45:26.818 ************************************ 00:45:26.818 23:26:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:45:26.818 23:26:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:26.818 23:26:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:26.818 23:26:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 ************************************ 00:45:26.818 START TEST fio_dif_1_multi_subsystems 00:45:26.818 ************************************ 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
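The fio run summarized above is driven through the SPDK fio bdev plugin: the test feeds the bdev_nvme_attach_controller JSON shown earlier to fio on one descriptor and a generated job file on another, then launches fio with the plugin preloaded. A rough stand-alone equivalent using ordinary files, with conf.json holding that JSON and the job options paraphrased from the run summary (randread, 4k, iodepth 4, ~10 s); the Nvme0n1 bdev name follows from the Nvme0 controller attached over TCP and is an assumption here:

cat > job.fio <<'EOF'
[global]
thread=1
direct=1
rw=randread
bs=4k
iodepth=4
runtime=10
time_based=1
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=conf.json job.fio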
00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 bdev_null0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.818 [2024-07-22 23:26:01.952067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:26.818 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.819 bdev_null1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.819 23:26:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:26.819 { 00:45:26.819 "params": { 00:45:26.819 "name": "Nvme$subsystem", 00:45:26.819 "trtype": "$TEST_TRANSPORT", 00:45:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:26.819 "adrfam": "ipv4", 00:45:26.819 "trsvcid": "$NVMF_PORT", 00:45:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:26.819 "hdgst": ${hdgst:-false}, 00:45:26.819 "ddgst": ${ddgst:-false} 00:45:26.819 }, 00:45:26.819 "method": "bdev_nvme_attach_controller" 00:45:26.819 } 00:45:26.819 EOF 00:45:26.819 )") 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:26.819 { 00:45:26.819 "params": { 00:45:26.819 "name": "Nvme$subsystem", 00:45:26.819 "trtype": "$TEST_TRANSPORT", 00:45:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:26.819 "adrfam": "ipv4", 00:45:26.819 "trsvcid": "$NVMF_PORT", 00:45:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:26.819 "hdgst": ${hdgst:-false}, 00:45:26.819 "ddgst": ${ddgst:-false} 00:45:26.819 }, 00:45:26.819 "method": "bdev_nvme_attach_controller" 00:45:26.819 } 00:45:26.819 EOF 00:45:26.819 )") 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:26.819 23:26:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
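The records above build the two-subsystem layout for this test: one 64 MB null bdev per subsystem, each created with 512-byte blocks, 16 bytes of metadata and DIF type 1, then exported over NVMe/TCP on 10.0.0.2:4420. As a rough standalone sketch, the same setup could be issued by hand with SPDK's scripts/rpc.py (the rpc_cmd wrapper traced here is the test suite's shortcut for the same RPCs; this assumes the nvmf target and TCP transport brought up earlier in the job are already running):

  # null bdevs with 16-byte metadata and DIF type 1, mirroring the traced rpc_cmd calls
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  # one subsystem per bdev, both listening on the same TCP portal
  for i in 0 1; do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done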
00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:26.819 "params": { 00:45:26.819 "name": "Nvme0", 00:45:26.819 "trtype": "tcp", 00:45:26.819 "traddr": "10.0.0.2", 00:45:26.819 "adrfam": "ipv4", 00:45:26.819 "trsvcid": "4420", 00:45:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:26.819 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:26.819 "hdgst": false, 00:45:26.819 "ddgst": false 00:45:26.819 }, 00:45:26.819 "method": "bdev_nvme_attach_controller" 00:45:26.819 },{ 00:45:26.819 "params": { 00:45:26.819 "name": "Nvme1", 00:45:26.819 "trtype": "tcp", 00:45:26.819 "traddr": "10.0.0.2", 00:45:26.819 "adrfam": "ipv4", 00:45:26.819 "trsvcid": "4420", 00:45:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:26.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:26.819 "hdgst": false, 00:45:26.819 "ddgst": false 00:45:26.819 }, 00:45:26.819 "method": "bdev_nvme_attach_controller" 00:45:26.819 }' 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:26.819 23:26:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.819 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:26.819 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:26.819 fio-3.35 00:45:26.819 Starting 2 threads 00:45:26.819 EAL: No free 2048 kB hugepages reported on node 1 00:45:39.021 00:45:39.021 filename0: (groupid=0, jobs=1): err= 0: pid=1081751: Mon Jul 22 23:26:13 2024 00:45:39.021 read: IOPS=184, BW=739KiB/s (757kB/s)(7424KiB/10040msec) 00:45:39.022 slat (nsec): min=6345, max=55039, avg=17474.34, stdev=6107.00 00:45:39.022 clat (usec): min=800, max=44046, avg=21582.61, stdev=20478.50 00:45:39.022 lat (usec): min=811, max=44062, avg=21600.08, stdev=20477.95 00:45:39.022 clat percentiles (usec): 00:45:39.022 | 1.00th=[ 881], 5.00th=[ 938], 10.00th=[ 963], 20.00th=[ 1004], 00:45:39.022 | 30.00th=[ 1029], 40.00th=[ 1090], 50.00th=[41157], 60.00th=[41681], 00:45:39.022 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:45:39.022 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[44303], 00:45:39.022 | 99.99th=[44303] 
00:45:39.022 bw ( KiB/s): min= 672, max= 768, per=49.98%, avg=740.80, stdev=34.86, samples=20 00:45:39.022 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:45:39.022 lat (usec) : 1000=19.99% 00:45:39.022 lat (msec) : 2=29.80%, 50=50.22% 00:45:39.022 cpu : usr=94.17%, sys=5.36%, ctx=14, majf=0, minf=158 00:45:39.022 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:39.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.022 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:39.022 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:39.022 filename1: (groupid=0, jobs=1): err= 0: pid=1081752: Mon Jul 22 23:26:13 2024 00:45:39.022 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10031msec) 00:45:39.022 slat (nsec): min=6108, max=55672, avg=17521.04, stdev=5534.43 00:45:39.022 clat (usec): min=882, max=42941, avg=21516.31, stdev=20499.03 00:45:39.022 lat (usec): min=897, max=42964, avg=21533.83, stdev=20498.54 00:45:39.022 clat percentiles (usec): 00:45:39.022 | 1.00th=[ 930], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1057], 00:45:39.022 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 3392], 60.00th=[41681], 00:45:39.022 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42730], 95.00th=[42730], 00:45:39.022 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:45:39.022 | 99.99th=[42730] 00:45:39.022 bw ( KiB/s): min= 704, max= 768, per=50.12%, avg=742.40, stdev=32.17, samples=20 00:45:39.022 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:45:39.022 lat (usec) : 1000=9.84% 00:45:39.022 lat (msec) : 2=40.05%, 4=0.22%, 50=49.89% 00:45:39.022 cpu : usr=93.75%, sys=5.78%, ctx=14, majf=0, minf=93 00:45:39.022 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:39.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.022 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:39.022 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:39.022 00:45:39.022 Run status group 0 (all jobs): 00:45:39.022 READ: bw=1480KiB/s (1516kB/s), 739KiB/s-742KiB/s (757kB/s-760kB/s), io=14.5MiB (15.2MB), run=10031-10040msec 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:39.022 23:26:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 00:45:39.022 real 0m11.632s 00:45:39.022 user 0m20.519s 00:45:39.022 sys 0m1.618s 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 ************************************ 00:45:39.022 END TEST fio_dif_1_multi_subsystems 00:45:39.022 ************************************ 00:45:39.022 23:26:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:45:39.022 23:26:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:39.022 23:26:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:39.022 23:26:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 ************************************ 00:45:39.022 START TEST fio_dif_rand_params 00:45:39.022 ************************************ 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 bdev_null0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:39.022 [2024-07-22 23:26:13.664530] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:39.022 { 00:45:39.022 "params": { 00:45:39.022 "name": "Nvme$subsystem", 00:45:39.022 "trtype": "$TEST_TRANSPORT", 00:45:39.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:39.022 "adrfam": "ipv4", 00:45:39.022 "trsvcid": "$NVMF_PORT", 00:45:39.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:39.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:39.022 "hdgst": ${hdgst:-false}, 00:45:39.022 "ddgst": ${ddgst:-false} 00:45:39.022 }, 00:45:39.022 "method": "bdev_nvme_attach_controller" 00:45:39.022 } 00:45:39.022 EOF 00:45:39.022 )") 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:39.022 23:26:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:39.022 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:39.023 "params": { 00:45:39.023 "name": "Nvme0", 00:45:39.023 "trtype": "tcp", 00:45:39.023 "traddr": "10.0.0.2", 00:45:39.023 "adrfam": "ipv4", 00:45:39.023 "trsvcid": "4420", 00:45:39.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:39.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:39.023 "hdgst": false, 00:45:39.023 "ddgst": false 00:45:39.023 }, 00:45:39.023 "method": "bdev_nvme_attach_controller" 00:45:39.023 }' 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:39.023 23:26:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:39.023 23:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:39.023 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:39.023 ... 00:45:39.023 fio-3.35 00:45:39.023 Starting 3 threads 00:45:39.023 EAL: No free 2048 kB hugepages reported on node 1 00:45:44.304 00:45:44.304 filename0: (groupid=0, jobs=1): err= 0: pid=1083137: Mon Jul 22 23:26:19 2024 00:45:44.304 read: IOPS=140, BW=17.6MiB/s (18.4MB/s)(88.0MiB/5008msec) 00:45:44.304 slat (nsec): min=13239, max=67519, avg=34259.47, stdev=5739.42 00:45:44.304 clat (usec): min=8061, max=51686, avg=21292.16, stdev=6225.61 00:45:44.304 lat (usec): min=8095, max=51754, avg=21326.42, stdev=6226.25 00:45:44.304 clat percentiles (usec): 00:45:44.304 | 1.00th=[ 9634], 5.00th=[12125], 10.00th=[13960], 20.00th=[15664], 00:45:44.304 | 30.00th=[17171], 40.00th=[19006], 50.00th=[20055], 60.00th=[23200], 00:45:44.304 | 70.00th=[25560], 80.00th=[26870], 90.00th=[28967], 95.00th=[31065], 00:45:44.304 | 99.00th=[33817], 99.50th=[38536], 99.90th=[51643], 99.95th=[51643], 00:45:44.304 | 99.99th=[51643] 00:45:44.304 bw ( KiB/s): min=13056, max=23808, per=33.80%, avg=17945.60, stdev=4019.64, samples=10 00:45:44.304 iops : min= 102, max= 186, avg=140.20, stdev=31.40, samples=10 00:45:44.304 lat (msec) : 10=1.42%, 20=47.59%, 50=50.57%, 100=0.43% 00:45:44.304 cpu : usr=92.07%, sys=6.73%, ctx=44, majf=0, minf=83 00:45:44.304 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.305 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.305 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:44.305 filename0: (groupid=0, jobs=1): err= 0: pid=1083138: Mon Jul 22 23:26:19 2024 00:45:44.305 read: IOPS=139, BW=17.5MiB/s (18.3MB/s)(88.1MiB/5049msec) 00:45:44.305 slat (usec): min=6, max=135, avg=33.51, stdev= 6.83 00:45:44.305 clat (usec): min=10103, max=63385, avg=21379.73, stdev=6337.10 00:45:44.305 lat (usec): min=10114, max=63418, avg=21413.24, stdev=6337.87 00:45:44.305 clat percentiles (usec): 00:45:44.305 | 1.00th=[10552], 5.00th=[13042], 10.00th=[13829], 20.00th=[15270], 00:45:44.305 | 30.00th=[16909], 40.00th=[18220], 50.00th=[20317], 60.00th=[23200], 00:45:44.305 | 70.00th=[25560], 80.00th=[26870], 90.00th=[30278], 95.00th=[31851], 00:45:44.305 | 99.00th=[33817], 99.50th=[34866], 99.90th=[63177], 99.95th=[63177], 00:45:44.305 | 99.99th=[63177] 00:45:44.305 bw ( KiB/s): min=13568, max=22573, per=33.86%, avg=17975.70, stdev=4004.20, samples=10 00:45:44.305 iops : min= 106, max= 176, avg=140.40, stdev=31.24, samples=10 00:45:44.305 lat (msec) : 20=48.51%, 50=51.35%, 100=0.14% 00:45:44.305 cpu : usr=91.80%, sys=7.03%, ctx=55, majf=0, minf=62 00:45:44.305 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.305 issued 
rwts: total=705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.305 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:44.305 filename0: (groupid=0, jobs=1): err= 0: pid=1083139: Mon Jul 22 23:26:19 2024 00:45:44.305 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5008msec) 00:45:44.305 slat (usec): min=5, max=165, avg=32.98, stdev= 7.70 00:45:44.305 clat (usec): min=11044, max=55280, avg=21883.70, stdev=6666.94 00:45:44.305 lat (usec): min=11076, max=55313, avg=21916.68, stdev=6666.90 00:45:44.305 clat percentiles (usec): 00:45:44.305 | 1.00th=[12518], 5.00th=[13829], 10.00th=[15008], 20.00th=[16712], 00:45:44.305 | 30.00th=[17433], 40.00th=[18744], 50.00th=[20317], 60.00th=[23462], 00:45:44.305 | 70.00th=[25297], 80.00th=[26870], 90.00th=[28181], 95.00th=[29754], 00:45:44.305 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:45:44.305 | 99.99th=[55313] 00:45:44.305 bw ( KiB/s): min=14080, max=22528, per=32.89%, avg=17459.20, stdev=2903.34, samples=10 00:45:44.305 iops : min= 110, max= 176, avg=136.40, stdev=22.68, samples=10 00:45:44.305 lat (msec) : 20=48.32%, 50=49.93%, 100=1.75% 00:45:44.305 cpu : usr=92.69%, sys=6.37%, ctx=7, majf=0, minf=71 00:45:44.305 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.305 issued rwts: total=685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.305 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:44.305 00:45:44.305 Run status group 0 (all jobs): 00:45:44.305 READ: bw=51.8MiB/s (54.4MB/s), 17.1MiB/s-17.6MiB/s (17.9MB/s-18.4MB/s), io=262MiB (274MB), run=5008-5049msec 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:44.305 23:26:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 bdev_null0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 [2024-07-22 23:26:20.293620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 bdev_null1 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 bdev_null2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 
00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:44.305 { 00:45:44.305 "params": { 00:45:44.305 "name": "Nvme$subsystem", 00:45:44.305 "trtype": "$TEST_TRANSPORT", 00:45:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:44.305 "adrfam": "ipv4", 00:45:44.305 "trsvcid": "$NVMF_PORT", 00:45:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:44.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:44.305 "hdgst": ${hdgst:-false}, 00:45:44.305 "ddgst": ${ddgst:-false} 00:45:44.305 }, 00:45:44.305 "method": "bdev_nvme_attach_controller" 00:45:44.305 } 00:45:44.305 EOF 00:45:44.305 )") 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:44.305 { 00:45:44.305 "params": { 00:45:44.305 "name": "Nvme$subsystem", 00:45:44.305 "trtype": "$TEST_TRANSPORT", 00:45:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:44.305 
"adrfam": "ipv4", 00:45:44.305 "trsvcid": "$NVMF_PORT", 00:45:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:44.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:44.305 "hdgst": ${hdgst:-false}, 00:45:44.305 "ddgst": ${ddgst:-false} 00:45:44.305 }, 00:45:44.305 "method": "bdev_nvme_attach_controller" 00:45:44.305 } 00:45:44.305 EOF 00:45:44.305 )") 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:44.305 { 00:45:44.305 "params": { 00:45:44.305 "name": "Nvme$subsystem", 00:45:44.305 "trtype": "$TEST_TRANSPORT", 00:45:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:44.305 "adrfam": "ipv4", 00:45:44.305 "trsvcid": "$NVMF_PORT", 00:45:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:44.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:44.305 "hdgst": ${hdgst:-false}, 00:45:44.305 "ddgst": ${ddgst:-false} 00:45:44.305 }, 00:45:44.305 "method": "bdev_nvme_attach_controller" 00:45:44.305 } 00:45:44.305 EOF 00:45:44.305 )") 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:44.305 "params": { 00:45:44.305 "name": "Nvme0", 00:45:44.305 "trtype": "tcp", 00:45:44.305 "traddr": "10.0.0.2", 00:45:44.305 "adrfam": "ipv4", 00:45:44.305 "trsvcid": "4420", 00:45:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:44.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:44.305 "hdgst": false, 00:45:44.305 "ddgst": false 00:45:44.305 }, 00:45:44.305 "method": "bdev_nvme_attach_controller" 00:45:44.305 },{ 00:45:44.305 "params": { 00:45:44.305 "name": "Nvme1", 00:45:44.305 "trtype": "tcp", 00:45:44.305 "traddr": "10.0.0.2", 00:45:44.305 "adrfam": "ipv4", 00:45:44.305 "trsvcid": "4420", 00:45:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:44.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:44.305 "hdgst": false, 00:45:44.305 "ddgst": false 00:45:44.305 }, 00:45:44.305 "method": "bdev_nvme_attach_controller" 00:45:44.305 },{ 00:45:44.305 "params": { 00:45:44.305 "name": "Nvme2", 00:45:44.305 "trtype": "tcp", 00:45:44.305 "traddr": "10.0.0.2", 00:45:44.305 "adrfam": "ipv4", 00:45:44.305 "trsvcid": "4420", 00:45:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:44.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:44.305 "hdgst": false, 00:45:44.305 "ddgst": false 00:45:44.305 }, 00:45:44.305 "method": "bdev_nvme_attach_controller" 00:45:44.305 }' 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:44.305 23:26:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:44.564 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:44.564 ... 00:45:44.564 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:44.564 ... 00:45:44.564 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:44.564 ... 
00:45:44.564 fio-3.35 00:45:44.564 Starting 24 threads 00:45:44.564 EAL: No free 2048 kB hugepages reported on node 1 00:45:56.793 00:45:56.793 filename0: (groupid=0, jobs=1): err= 0: pid=1083886: Mon Jul 22 23:26:31 2024 00:45:56.793 read: IOPS=347, BW=1390KiB/s (1423kB/s)(13.6MiB/10037msec) 00:45:56.793 slat (usec): min=5, max=227, avg=73.34, stdev=35.28 00:45:56.793 clat (usec): min=15811, max=76127, avg=45392.55, stdev=5166.13 00:45:56.793 lat (usec): min=15817, max=76222, avg=45465.88, stdev=5171.63 00:45:56.793 clat percentiles (usec): 00:45:56.793 | 1.00th=[26608], 5.00th=[43254], 10.00th=[43779], 20.00th=[44303], 00:45:56.793 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:45:56.793 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[49021], 00:45:56.793 | 99.00th=[71828], 99.50th=[74974], 99.90th=[74974], 99.95th=[76022], 00:45:56.793 | 99.99th=[76022] 00:45:56.793 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1388.90, stdev=62.46, samples=20 00:45:56.793 iops : min= 320, max= 384, avg=347.20, stdev=15.66, samples=20 00:45:56.793 lat (msec) : 20=0.46%, 50=95.01%, 100=4.53% 00:45:56.793 cpu : usr=97.67%, sys=1.60%, ctx=43, majf=0, minf=45 00:45:56.793 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:56.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 issued rwts: total=3488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.793 filename0: (groupid=0, jobs=1): err= 0: pid=1083887: Mon Jul 22 23:26:31 2024 00:45:56.793 read: IOPS=343, BW=1374KiB/s (1407kB/s)(13.4MiB/10012msec) 00:45:56.793 slat (usec): min=9, max=213, avg=62.75, stdev=41.94 00:45:56.793 clat (msec): min=26, max=108, avg=46.00, stdev= 6.12 00:45:56.793 lat (msec): min=26, max=108, avg=46.06, stdev= 6.12 00:45:56.793 clat percentiles (msec): 00:45:56.793 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:45:56.793 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:45:56.793 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 50], 00:45:56.793 | 99.00th=[ 73], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 109], 00:45:56.793 | 99.99th=[ 109] 00:45:56.793 bw ( KiB/s): min= 1024, max= 1408, per=4.13%, avg=1369.60, stdev=93.78, samples=20 00:45:56.793 iops : min= 256, max= 352, avg=342.40, stdev=23.45, samples=20 00:45:56.793 lat (msec) : 50=95.29%, 100=4.19%, 250=0.52% 00:45:56.793 cpu : usr=96.53%, sys=2.01%, ctx=139, majf=0, minf=28 00:45:56.793 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.793 filename0: (groupid=0, jobs=1): err= 0: pid=1083888: Mon Jul 22 23:26:31 2024 00:45:56.793 read: IOPS=343, BW=1374KiB/s (1407kB/s)(13.4MiB/10011msec) 00:45:56.793 slat (usec): min=8, max=115, avg=46.25, stdev=16.80 00:45:56.793 clat (msec): min=24, max=108, avg=46.14, stdev= 6.19 00:45:56.793 lat (msec): min=24, max=108, avg=46.19, stdev= 6.19 00:45:56.793 clat percentiles (msec): 00:45:56.793 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.793 | 30.00th=[ 45], 40.00th=[ 45], 
50.00th=[ 45], 60.00th=[ 46], 00:45:56.793 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 50], 00:45:56.793 | 99.00th=[ 74], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 109], 00:45:56.793 | 99.99th=[ 109] 00:45:56.793 bw ( KiB/s): min= 1024, max= 1408, per=4.13%, avg=1369.60, stdev=93.78, samples=20 00:45:56.793 iops : min= 256, max= 352, avg=342.40, stdev=23.45, samples=20 00:45:56.793 lat (msec) : 50=95.23%, 100=4.24%, 250=0.52% 00:45:56.793 cpu : usr=98.12%, sys=1.36%, ctx=20, majf=0, minf=24 00:45:56.793 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.793 filename0: (groupid=0, jobs=1): err= 0: pid=1083889: Mon Jul 22 23:26:31 2024 00:45:56.793 read: IOPS=346, BW=1385KiB/s (1418kB/s)(13.6MiB/10030msec) 00:45:56.793 slat (usec): min=8, max=104, avg=28.67, stdev=16.70 00:45:56.793 clat (usec): min=22223, max=73994, avg=45977.02, stdev=4723.24 00:45:56.793 lat (usec): min=22234, max=74037, avg=46005.69, stdev=4724.27 00:45:56.793 clat percentiles (usec): 00:45:56.793 | 1.00th=[42730], 5.00th=[44303], 10.00th=[44303], 20.00th=[44827], 00:45:56.793 | 30.00th=[44827], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.793 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[51119], 00:45:56.793 | 99.00th=[67634], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:45:56.793 | 99.99th=[73925] 00:45:56.793 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1382.40, stdev=89.07, samples=20 00:45:56.793 iops : min= 288, max= 384, avg=345.60, stdev=22.27, samples=20 00:45:56.793 lat (msec) : 50=94.73%, 100=5.27% 00:45:56.793 cpu : usr=97.71%, sys=1.82%, ctx=19, majf=0, minf=40 00:45:56.793 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.793 filename0: (groupid=0, jobs=1): err= 0: pid=1083890: Mon Jul 22 23:26:31 2024 00:45:56.793 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10002msec) 00:45:56.793 slat (usec): min=11, max=138, avg=46.64, stdev=16.94 00:45:56.793 clat (usec): min=35311, max=92146, avg=46114.15, stdev=5221.28 00:45:56.793 lat (usec): min=35341, max=92192, avg=46160.80, stdev=5221.73 00:45:56.793 clat percentiles (usec): 00:45:56.793 | 1.00th=[43779], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:45:56.793 | 30.00th=[44827], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.793 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[54264], 00:45:56.793 | 99.00th=[71828], 99.50th=[88605], 99.90th=[88605], 99.95th=[91751], 00:45:56.793 | 99.99th=[91751] 00:45:56.793 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, samples=19 00:45:56.793 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.793 lat (msec) : 50=94.48%, 100=5.52% 00:45:56.793 cpu : usr=97.68%, sys=1.72%, ctx=25, majf=0, minf=19 00:45:56.793 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.793 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.793 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.793 filename0: (groupid=0, jobs=1): err= 0: pid=1083891: Mon Jul 22 23:26:31 2024 00:45:56.793 read: IOPS=343, BW=1375KiB/s (1408kB/s)(13.4MiB/10009msec) 00:45:56.793 slat (nsec): min=9420, max=97328, avg=32168.32, stdev=12517.52 00:45:56.793 clat (msec): min=32, max=131, avg=46.26, stdev= 7.01 00:45:56.793 lat (msec): min=32, max=131, avg=46.29, stdev= 7.01 00:45:56.793 clat percentiles (msec): 00:45:56.793 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.793 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.793 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 52], 00:45:56.793 | 99.00th=[ 70], 99.50th=[ 73], 99.90th=[ 132], 99.95th=[ 132], 00:45:56.793 | 99.99th=[ 132] 00:45:56.793 bw ( KiB/s): min= 1024, max= 1408, per=4.12%, avg=1367.58, stdev=95.91, samples=19 00:45:56.793 iops : min= 256, max= 352, avg=341.89, stdev=23.98, samples=19 00:45:56.793 lat (msec) : 50=93.95%, 100=5.58%, 250=0.47% 00:45:56.793 cpu : usr=97.52%, sys=1.77%, ctx=55, majf=0, minf=27 00:45:56.794 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename0: (groupid=0, jobs=1): err= 0: pid=1083892: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1375KiB/s (1408kB/s)(13.4MiB/10005msec) 00:45:56.794 slat (usec): min=10, max=146, avg=46.43, stdev=16.78 00:45:56.794 clat (msec): min=21, max=141, avg=46.11, stdev= 7.78 00:45:56.794 lat (msec): min=21, max=141, avg=46.15, stdev= 7.78 00:45:56.794 clat percentiles (msec): 00:45:56.794 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.794 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.794 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 51], 00:45:56.794 | 99.00th=[ 72], 99.50th=[ 93], 99.90th=[ 142], 99.95th=[ 142], 00:45:56.794 | 99.99th=[ 142] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.12%, avg=1367.58, stdev=104.97, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=341.89, stdev=26.24, samples=19 00:45:56.794 lat (msec) : 50=94.94%, 100=4.59%, 250=0.47% 00:45:56.794 cpu : usr=97.62%, sys=1.60%, ctx=56, majf=0, minf=25 00:45:56.794 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename0: (groupid=0, jobs=1): err= 0: pid=1083893: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1375KiB/s (1408kB/s)(13.4MiB/10006msec) 00:45:56.794 slat (usec): min=8, max=124, avg=44.00, stdev=21.33 00:45:56.794 clat (msec): min=30, max=128, avg=46.12, stdev= 6.89 00:45:56.794 lat (msec): min=30, max=129, avg=46.17, stdev= 6.89 00:45:56.794 clat percentiles 
(msec): 00:45:56.794 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.794 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.794 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 52], 00:45:56.794 | 99.00th=[ 70], 99.50th=[ 73], 99.90th=[ 129], 99.95th=[ 129], 00:45:56.794 | 99.99th=[ 129] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.12%, avg=1367.58, stdev=95.91, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=341.89, stdev=23.98, samples=19 00:45:56.794 lat (msec) : 50=93.95%, 100=5.58%, 250=0.47% 00:45:56.794 cpu : usr=96.82%, sys=2.02%, ctx=142, majf=0, minf=18 00:45:56.794 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename1: (groupid=0, jobs=1): err= 0: pid=1083894: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10001msec) 00:45:56.794 slat (usec): min=8, max=144, avg=46.17, stdev=16.50 00:45:56.794 clat (msec): min=43, max=106, avg=46.11, stdev= 5.18 00:45:56.794 lat (msec): min=43, max=106, avg=46.16, stdev= 5.18 00:45:56.794 clat percentiles (msec): 00:45:56.794 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.794 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.794 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 56], 00:45:56.794 | 99.00th=[ 72], 99.50th=[ 73], 99.90th=[ 89], 99.95th=[ 107], 00:45:56.794 | 99.99th=[ 107] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.794 lat (msec) : 50=94.42%, 100=5.52%, 250=0.06% 00:45:56.794 cpu : usr=96.94%, sys=1.92%, ctx=78, majf=0, minf=22 00:45:56.794 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename1: (groupid=0, jobs=1): err= 0: pid=1083896: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10001msec) 00:45:56.794 slat (usec): min=13, max=155, avg=47.25, stdev=17.55 00:45:56.794 clat (msec): min=33, max=106, avg=46.08, stdev= 5.19 00:45:56.794 lat (msec): min=33, max=106, avg=46.13, stdev= 5.19 00:45:56.794 clat percentiles (msec): 00:45:56.794 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.794 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.794 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 56], 00:45:56.794 | 99.00th=[ 72], 99.50th=[ 73], 99.90th=[ 89], 99.95th=[ 106], 00:45:56.794 | 99.99th=[ 107] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.794 lat (msec) : 50=94.42%, 100=5.52%, 250=0.06% 00:45:56.794 cpu : usr=95.51%, sys=2.48%, ctx=526, majf=0, minf=21 00:45:56.794 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename1: (groupid=0, jobs=1): err= 0: pid=1083897: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1375KiB/s (1408kB/s)(13.4MiB/10004msec) 00:45:56.794 slat (usec): min=11, max=178, avg=76.80, stdev=28.15 00:45:56.794 clat (msec): min=21, max=139, avg=45.83, stdev= 7.70 00:45:56.794 lat (msec): min=21, max=139, avg=45.91, stdev= 7.70 00:45:56.794 clat percentiles (msec): 00:45:56.794 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:45:56.794 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:45:56.794 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 51], 00:45:56.794 | 99.00th=[ 72], 99.50th=[ 79], 99.90th=[ 140], 99.95th=[ 140], 00:45:56.794 | 99.99th=[ 140] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.12%, avg=1367.58, stdev=104.97, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=341.89, stdev=26.24, samples=19 00:45:56.794 lat (msec) : 50=94.94%, 100=4.59%, 250=0.47% 00:45:56.794 cpu : usr=97.76%, sys=1.48%, ctx=41, majf=0, minf=27 00:45:56.794 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename1: (groupid=0, jobs=1): err= 0: pid=1083898: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1375KiB/s (1408kB/s)(13.4MiB/10009msec) 00:45:56.794 slat (usec): min=8, max=101, avg=33.22, stdev=14.35 00:45:56.794 clat (msec): min=32, max=131, avg=46.24, stdev= 7.01 00:45:56.794 lat (msec): min=32, max=131, avg=46.28, stdev= 7.01 00:45:56.794 clat percentiles (msec): 00:45:56.794 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.794 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.794 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 52], 00:45:56.794 | 99.00th=[ 70], 99.50th=[ 73], 99.90th=[ 132], 99.95th=[ 132], 00:45:56.794 | 99.99th=[ 132] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.12%, avg=1367.58, stdev=95.91, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=341.89, stdev=23.98, samples=19 00:45:56.794 lat (msec) : 50=93.95%, 100=5.58%, 250=0.47% 00:45:56.794 cpu : usr=96.79%, sys=1.98%, ctx=72, majf=0, minf=15 00:45:56.794 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename1: (groupid=0, jobs=1): err= 0: pid=1083899: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10001msec) 00:45:56.794 slat (usec): min=11, max=122, avg=32.18, stdev=13.68 00:45:56.794 clat (usec): min=34188, max=91138, avg=46251.13, stdev=5239.95 00:45:56.794 lat (usec): min=34203, max=91194, 
avg=46283.31, stdev=5240.49 00:45:56.794 clat percentiles (usec): 00:45:56.794 | 1.00th=[43779], 5.00th=[44303], 10.00th=[44303], 20.00th=[44827], 00:45:56.794 | 30.00th=[44827], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.794 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[55313], 00:45:56.794 | 99.00th=[71828], 99.50th=[88605], 99.90th=[88605], 99.95th=[90702], 00:45:56.794 | 99.99th=[90702] 00:45:56.794 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, samples=19 00:45:56.794 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.794 lat (msec) : 50=94.42%, 100=5.58% 00:45:56.794 cpu : usr=97.80%, sys=1.69%, ctx=19, majf=0, minf=24 00:45:56.794 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:56.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.794 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.794 filename1: (groupid=0, jobs=1): err= 0: pid=1083900: Mon Jul 22 23:26:31 2024 00:45:56.794 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10002msec) 00:45:56.794 slat (usec): min=8, max=239, avg=57.06, stdev=34.12 00:45:56.794 clat (msec): min=21, max=138, avg=46.00, stdev= 7.63 00:45:56.794 lat (msec): min=21, max=138, avg=46.06, stdev= 7.63 00:45:56.794 clat percentiles (msec): 00:45:56.794 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:45:56.795 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.795 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 51], 00:45:56.795 | 99.00th=[ 72], 99.50th=[ 92], 99.90th=[ 138], 99.95th=[ 138], 00:45:56.795 | 99.99th=[ 138] 00:45:56.795 bw ( KiB/s): min= 1026, max= 1408, per=4.12%, avg=1367.68, stdev=104.61, samples=19 00:45:56.795 iops : min= 256, max= 352, avg=341.89, stdev=26.24, samples=19 00:45:56.795 lat (msec) : 50=95.00%, 100=4.53%, 250=0.47% 00:45:56.795 cpu : usr=96.80%, sys=1.99%, ctx=105, majf=0, minf=19 00:45:56.795 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename1: (groupid=0, jobs=1): err= 0: pid=1083901: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=345, BW=1381KiB/s (1415kB/s)(13.5MiB/10007msec) 00:45:56.795 slat (usec): min=8, max=217, avg=48.86, stdev=49.98 00:45:56.795 clat (usec): min=31149, max=72557, avg=45891.39, stdev=4257.70 00:45:56.795 lat (usec): min=31162, max=72585, avg=45940.25, stdev=4260.18 00:45:56.795 clat percentiles (usec): 00:45:56.795 | 1.00th=[42730], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:45:56.795 | 30.00th=[44827], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.795 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[52167], 00:45:56.795 | 99.00th=[67634], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:45:56.795 | 99.99th=[72877] 00:45:56.795 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1381.05, stdev=80.72, samples=19 00:45:56.795 iops : min= 288, max= 384, avg=345.26, stdev=20.18, samples=19 00:45:56.795 lat (msec) : 50=93.40%, 100=6.60% 00:45:56.795 
cpu : usr=97.51%, sys=1.73%, ctx=80, majf=0, minf=25 00:45:56.795 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename1: (groupid=0, jobs=1): err= 0: pid=1083902: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=376, BW=1506KiB/s (1542kB/s)(14.7MiB/10005msec) 00:45:56.795 slat (usec): min=11, max=100, avg=28.95, stdev=21.87 00:45:56.795 clat (msec): min=11, max=156, avg=42.34, stdev=11.49 00:45:56.795 lat (msec): min=11, max=156, avg=42.37, stdev=11.49 00:45:56.795 clat percentiles (msec): 00:45:56.795 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 34], 00:45:56.795 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 45], 00:45:56.795 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 52], 95.00th=[ 57], 00:45:56.795 | 99.00th=[ 71], 99.50th=[ 78], 99.90th=[ 157], 99.95th=[ 157], 00:45:56.795 | 99.99th=[ 157] 00:45:56.795 bw ( KiB/s): min= 894, max= 1728, per=4.49%, avg=1487.89, stdev=180.44, samples=19 00:45:56.795 iops : min= 223, max= 432, avg=371.95, stdev=45.20, samples=19 00:45:56.795 lat (msec) : 20=0.53%, 50=86.40%, 100=12.59%, 250=0.48% 00:45:56.795 cpu : usr=97.69%, sys=1.65%, ctx=48, majf=0, minf=36 00:45:56.795 IO depths : 1=0.1%, 2=1.3%, 4=7.7%, 8=76.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=89.8%, 8=6.7%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename2: (groupid=0, jobs=1): err= 0: pid=1083903: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10001msec) 00:45:56.795 slat (nsec): min=11616, max=96872, avg=26795.80, stdev=13898.11 00:45:56.795 clat (msec): min=32, max=106, avg=46.30, stdev= 5.19 00:45:56.795 lat (msec): min=32, max=106, avg=46.33, stdev= 5.19 00:45:56.795 clat percentiles (msec): 00:45:56.795 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.795 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.795 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 56], 00:45:56.795 | 99.00th=[ 72], 99.50th=[ 73], 99.90th=[ 89], 99.95th=[ 106], 00:45:56.795 | 99.99th=[ 107] 00:45:56.795 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, samples=19 00:45:56.795 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.795 lat (msec) : 50=94.22%, 100=5.73%, 250=0.06% 00:45:56.795 cpu : usr=97.96%, sys=1.54%, ctx=23, majf=0, minf=26 00:45:56.795 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename2: (groupid=0, jobs=1): err= 0: pid=1083904: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=349, BW=1396KiB/s (1430kB/s)(13.7MiB/10038msec) 00:45:56.795 slat (usec): min=6, max=117, avg=27.98, 
stdev=18.54 00:45:56.795 clat (usec): min=16072, max=88794, avg=45588.58, stdev=5025.01 00:45:56.795 lat (usec): min=16079, max=88836, avg=45616.56, stdev=5026.01 00:45:56.795 clat percentiles (usec): 00:45:56.795 | 1.00th=[31851], 5.00th=[44303], 10.00th=[44303], 20.00th=[44827], 00:45:56.795 | 30.00th=[44827], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.795 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[51643], 00:45:56.795 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[88605], 00:45:56.795 | 99.99th=[88605] 00:45:56.795 bw ( KiB/s): min= 1280, max= 1408, per=4.21%, avg=1395.20, stdev=39.40, samples=20 00:45:56.795 iops : min= 320, max= 352, avg=348.80, stdev= 9.85, samples=20 00:45:56.795 lat (msec) : 20=0.46%, 50=93.95%, 100=5.59% 00:45:56.795 cpu : usr=96.13%, sys=2.27%, ctx=163, majf=0, minf=25 00:45:56.795 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename2: (groupid=0, jobs=1): err= 0: pid=1083905: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10003msec) 00:45:56.795 slat (usec): min=14, max=129, avg=46.12, stdev=15.71 00:45:56.795 clat (msec): min=21, max=139, avg=46.10, stdev= 7.67 00:45:56.795 lat (msec): min=21, max=139, avg=46.15, stdev= 7.67 00:45:56.795 clat percentiles (msec): 00:45:56.795 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.795 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.795 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 51], 00:45:56.795 | 99.00th=[ 72], 99.50th=[ 93], 99.90th=[ 140], 99.95th=[ 140], 00:45:56.795 | 99.99th=[ 140] 00:45:56.795 bw ( KiB/s): min= 1024, max= 1408, per=4.12%, avg=1367.58, stdev=104.97, samples=19 00:45:56.795 iops : min= 256, max= 352, avg=341.89, stdev=26.24, samples=19 00:45:56.795 lat (msec) : 50=94.94%, 100=4.59%, 250=0.47% 00:45:56.795 cpu : usr=97.82%, sys=1.58%, ctx=45, majf=0, minf=21 00:45:56.795 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename2: (groupid=0, jobs=1): err= 0: pid=1083906: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10001msec) 00:45:56.795 slat (usec): min=13, max=163, avg=47.53, stdev=18.41 00:45:56.795 clat (usec): min=36868, max=90284, avg=46084.10, stdev=5198.99 00:45:56.795 lat (usec): min=37022, max=90329, avg=46131.63, stdev=5199.27 00:45:56.795 clat percentiles (usec): 00:45:56.795 | 1.00th=[43779], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:45:56.795 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.795 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[54264], 00:45:56.795 | 99.00th=[71828], 99.50th=[88605], 99.90th=[89654], 99.95th=[90702], 00:45:56.795 | 99.99th=[90702] 00:45:56.795 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, 
samples=19 00:45:56.795 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.795 lat (msec) : 50=94.48%, 100=5.52% 00:45:56.795 cpu : usr=96.60%, sys=2.10%, ctx=194, majf=0, minf=18 00:45:56.795 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.795 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.795 filename2: (groupid=0, jobs=1): err= 0: pid=1083907: Mon Jul 22 23:26:31 2024 00:45:56.795 read: IOPS=346, BW=1384KiB/s (1418kB/s)(13.6MiB/10032msec) 00:45:56.795 slat (usec): min=11, max=228, avg=52.91, stdev=37.42 00:45:56.795 clat (msec): min=14, max=108, avg=45.77, stdev= 4.86 00:45:56.795 lat (msec): min=14, max=108, avg=45.83, stdev= 4.87 00:45:56.795 clat percentiles (msec): 00:45:56.795 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.796 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.796 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 51], 00:45:56.796 | 99.00th=[ 72], 99.50th=[ 74], 99.90th=[ 95], 99.95th=[ 109], 00:45:56.796 | 99.99th=[ 109] 00:45:56.796 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1382.40, stdev=89.22, samples=20 00:45:56.796 iops : min= 288, max= 384, avg=345.60, stdev=22.31, samples=20 00:45:56.796 lat (msec) : 20=0.06%, 50=94.93%, 100=4.95%, 250=0.06% 00:45:56.796 cpu : usr=96.70%, sys=2.25%, ctx=74, majf=0, minf=28 00:45:56.796 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.796 filename2: (groupid=0, jobs=1): err= 0: pid=1083908: Mon Jul 22 23:26:31 2024 00:45:56.796 read: IOPS=355, BW=1420KiB/s (1454kB/s)(13.9MiB/10003msec) 00:45:56.796 slat (usec): min=9, max=226, avg=44.49, stdev=23.42 00:45:56.796 clat (msec): min=17, max=139, avg=44.68, stdev= 9.47 00:45:56.796 lat (msec): min=17, max=139, avg=44.72, stdev= 9.47 00:45:56.796 clat percentiles (msec): 00:45:56.796 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 45], 00:45:56.796 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:45:56.796 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 56], 00:45:56.796 | 99.00th=[ 73], 99.50th=[ 108], 99.90th=[ 140], 99.95th=[ 140], 00:45:56.796 | 99.99th=[ 140] 00:45:56.796 bw ( KiB/s): min= 1024, max= 1728, per=4.27%, avg=1414.74, stdev=151.82, samples=19 00:45:56.796 iops : min= 256, max= 432, avg=353.68, stdev=37.95, samples=19 00:45:56.796 lat (msec) : 20=0.11%, 50=93.75%, 100=5.63%, 250=0.51% 00:45:56.796 cpu : usr=97.66%, sys=1.68%, ctx=34, majf=0, minf=29 00:45:56.796 IO depths : 1=5.0%, 2=10.2%, 4=21.7%, 8=55.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:45:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.796 filename2: (groupid=0, jobs=1): err= 0: pid=1083909: Mon Jul 
22 23:26:31 2024 00:45:56.796 read: IOPS=343, BW=1375KiB/s (1408kB/s)(13.4MiB/10005msec) 00:45:56.796 slat (usec): min=8, max=194, avg=47.59, stdev=31.37 00:45:56.796 clat (msec): min=32, max=128, avg=46.11, stdev= 6.87 00:45:56.796 lat (msec): min=32, max=128, avg=46.16, stdev= 6.88 00:45:56.796 clat percentiles (msec): 00:45:56.796 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:45:56.796 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:45:56.796 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 52], 00:45:56.796 | 99.00th=[ 70], 99.50th=[ 88], 99.90th=[ 129], 99.95th=[ 129], 00:45:56.796 | 99.99th=[ 129] 00:45:56.796 bw ( KiB/s): min= 1026, max= 1408, per=4.12%, avg=1367.68, stdev=95.66, samples=19 00:45:56.796 iops : min= 256, max= 352, avg=341.89, stdev=24.01, samples=19 00:45:56.796 lat (msec) : 50=94.07%, 100=5.47%, 250=0.47% 00:45:56.796 cpu : usr=96.65%, sys=2.03%, ctx=93, majf=0, minf=15 00:45:56.796 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.796 filename2: (groupid=0, jobs=1): err= 0: pid=1083910: Mon Jul 22 23:26:31 2024 00:45:56.796 read: IOPS=343, BW=1376KiB/s (1409kB/s)(13.4MiB/10002msec) 00:45:56.796 slat (usec): min=10, max=135, avg=45.80, stdev=15.75 00:45:56.796 clat (usec): min=33830, max=88873, avg=46109.95, stdev=5133.07 00:45:56.796 lat (usec): min=33841, max=88903, avg=46155.75, stdev=5133.65 00:45:56.796 clat percentiles (usec): 00:45:56.796 | 1.00th=[43779], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:45:56.796 | 30.00th=[44827], 40.00th=[44827], 50.00th=[44827], 60.00th=[45351], 00:45:56.796 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[55313], 00:45:56.796 | 99.00th=[71828], 99.50th=[72877], 99.90th=[88605], 99.95th=[88605], 00:45:56.796 | 99.99th=[88605] 00:45:56.796 bw ( KiB/s): min= 1024, max= 1408, per=4.14%, avg=1374.32, stdev=93.89, samples=19 00:45:56.796 iops : min= 256, max= 352, avg=343.58, stdev=23.47, samples=19 00:45:56.796 lat (msec) : 50=94.30%, 100=5.70% 00:45:56.796 cpu : usr=96.06%, sys=2.46%, ctx=171, majf=0, minf=21 00:45:56.796 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:56.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:56.796 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:56.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:56.796 00:45:56.796 Run status group 0 (all jobs): 00:45:56.796 READ: bw=32.4MiB/s (33.9MB/s), 1374KiB/s-1506KiB/s (1407kB/s-1542kB/s), io=325MiB (341MB), run=10001-10038msec 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:56.796 
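For orientation, the dif.sh@115 entries above select the parameters for the next fio_dif_rand_params pass: NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1. A minimal hand-run equivalent is sketched below under stated assumptions: the job-file name, the bdev names used as fio filenames, the JSON config path and the plugin path are placeholders, not values taken from this run.

# sketch: reproduce the traced fio pass by hand (names/paths are placeholders)
cat > randread.fio <<'EOF'
[global]
; values below mirror the dif.sh@115 trace (bs is read,write,trim block sizes)
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
; assumption: namespace bdev exposed after attaching controller Nvme0
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=./target.json randread.fio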
23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 bdev_null0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:56.796 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.797 [2024-07-22 23:26:32.382004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.797 bdev_null1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:56.797 { 00:45:56.797 "params": { 00:45:56.797 "name": "Nvme$subsystem", 00:45:56.797 "trtype": "$TEST_TRANSPORT", 00:45:56.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:56.797 "adrfam": "ipv4", 00:45:56.797 "trsvcid": "$NVMF_PORT", 00:45:56.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:56.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:56.797 "hdgst": ${hdgst:-false}, 00:45:56.797 "ddgst": ${ddgst:-false} 00:45:56.797 }, 00:45:56.797 "method": "bdev_nvme_attach_controller" 00:45:56.797 } 00:45:56.797 EOF 00:45:56.797 )") 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # 
shift 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:56.797 { 00:45:56.797 "params": { 00:45:56.797 "name": "Nvme$subsystem", 00:45:56.797 "trtype": "$TEST_TRANSPORT", 00:45:56.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:56.797 "adrfam": "ipv4", 00:45:56.797 "trsvcid": "$NVMF_PORT", 00:45:56.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:56.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:56.797 "hdgst": ${hdgst:-false}, 00:45:56.797 "ddgst": ${ddgst:-false} 00:45:56.797 }, 00:45:56.797 "method": "bdev_nvme_attach_controller" 00:45:56.797 } 00:45:56.797 EOF 00:45:56.797 )") 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:56.797 "params": { 00:45:56.797 "name": "Nvme0", 00:45:56.797 "trtype": "tcp", 00:45:56.797 "traddr": "10.0.0.2", 00:45:56.797 "adrfam": "ipv4", 00:45:56.797 "trsvcid": "4420", 00:45:56.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:56.797 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:56.797 "hdgst": false, 00:45:56.797 "ddgst": false 00:45:56.797 }, 00:45:56.797 "method": "bdev_nvme_attach_controller" 00:45:56.797 },{ 00:45:56.797 "params": { 00:45:56.797 "name": "Nvme1", 00:45:56.797 "trtype": "tcp", 00:45:56.797 "traddr": "10.0.0.2", 00:45:56.797 "adrfam": "ipv4", 00:45:56.797 "trsvcid": "4420", 00:45:56.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:56.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:56.797 "hdgst": false, 00:45:56.797 "ddgst": false 00:45:56.797 }, 00:45:56.797 "method": "bdev_nvme_attach_controller" 00:45:56.797 }' 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:56.797 23:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:56.797 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:56.797 ... 00:45:56.797 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:56.797 ... 
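For reference, the target side used by this fio pass was assembled by the rpc_cmd calls traced above (create_subsystems 0 1). Collected in one place, the per-subsystem sequence is roughly the sketch below; scripts/rpc.py is shown as a stand-in for the test's rpc_cmd wrapper and the default RPC socket is assumed, while the RPC names and arguments are copied from the trace.

# sketch: target-side setup per subsystem, arguments copied from the xtrace above
for i in 0 1; do
  ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      --serial-number 53313233-$i --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done
# the JSON printed above then points the fio host at these subsystems over NVMe/TCP
# (traddr 10.0.0.2, trsvcid 4420, hdgst/ddgst false for this pass)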
00:45:56.797 fio-3.35 00:45:56.797 Starting 4 threads 00:45:56.797 EAL: No free 2048 kB hugepages reported on node 1 00:46:03.368 00:46:03.368 filename0: (groupid=0, jobs=1): err= 0: pid=1085304: Mon Jul 22 23:26:38 2024 00:46:03.368 read: IOPS=874, BW=6997KiB/s (7165kB/s)(34.2MiB/5003msec) 00:46:03.368 slat (usec): min=5, max=122, avg=36.58, stdev=17.59 00:46:03.368 clat (usec): min=1480, max=16751, avg=8993.52, stdev=1638.58 00:46:03.368 lat (usec): min=1515, max=16765, avg=9030.10, stdev=1638.23 00:46:03.368 clat percentiles (usec): 00:46:03.368 | 1.00th=[ 2442], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 8586], 00:46:03.368 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8979], 00:46:03.368 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[11731], 00:46:03.368 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16319], 99.95th=[16319], 00:46:03.368 | 99.99th=[16712] 00:46:03.368 bw ( KiB/s): min= 6896, max= 7216, per=24.70%, avg=7009.78, stdev=111.84, samples=9 00:46:03.368 iops : min= 862, max= 902, avg=876.22, stdev=13.98, samples=9 00:46:03.368 lat (msec) : 2=0.25%, 4=1.81%, 10=90.29%, 20=7.66% 00:46:03.368 cpu : usr=94.14%, sys=5.00%, ctx=6, majf=0, minf=114 00:46:03.368 IO depths : 1=1.3%, 2=21.0%, 4=53.4%, 8=24.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 issued rwts: total=4376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.368 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:03.368 filename0: (groupid=0, jobs=1): err= 0: pid=1085305: Mon Jul 22 23:26:38 2024 00:46:03.368 read: IOPS=893, BW=7150KiB/s (7321kB/s)(34.9MiB/5005msec) 00:46:03.368 slat (usec): min=9, max=123, avg=34.28, stdev=16.77 00:46:03.368 clat (usec): min=1946, max=16228, avg=8818.62, stdev=986.48 00:46:03.368 lat (usec): min=1979, max=16269, avg=8852.90, stdev=987.87 00:46:03.368 clat percentiles (usec): 00:46:03.368 | 1.00th=[ 4686], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8586], 00:46:03.368 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:46:03.368 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:46:03.368 | 99.00th=[12911], 99.50th=[14353], 99.90th=[15008], 99.95th=[15401], 00:46:03.368 | 99.99th=[16188] 00:46:03.368 bw ( KiB/s): min= 6912, max= 7424, per=25.18%, avg=7145.40, stdev=151.77, samples=10 00:46:03.368 iops : min= 864, max= 928, avg=893.10, stdev=18.88, samples=10 00:46:03.368 lat (msec) : 2=0.02%, 4=0.65%, 10=96.51%, 20=2.82% 00:46:03.368 cpu : usr=94.36%, sys=4.78%, ctx=8, majf=0, minf=106 00:46:03.368 IO depths : 1=1.4%, 2=19.5%, 4=54.5%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 issued rwts: total=4473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.368 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:03.368 filename1: (groupid=0, jobs=1): err= 0: pid=1085306: Mon Jul 22 23:26:38 2024 00:46:03.368 read: IOPS=889, BW=7115KiB/s (7286kB/s)(34.8MiB/5007msec) 00:46:03.368 slat (nsec): min=5376, max=99412, avg=27752.59, stdev=15345.33 00:46:03.368 clat (usec): min=1891, max=16005, avg=8896.64, stdev=909.38 00:46:03.368 lat (usec): min=1910, max=16034, avg=8924.39, stdev=910.04 00:46:03.368 clat percentiles (usec): 00:46:03.368 | 1.00th=[ 6194], 5.00th=[ 7767], 
10.00th=[ 8225], 20.00th=[ 8586], 00:46:03.368 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8979], 00:46:03.368 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9896], 00:46:03.368 | 99.00th=[13042], 99.50th=[13698], 99.90th=[15270], 99.95th=[15401], 00:46:03.368 | 99.99th=[16057] 00:46:03.368 bw ( KiB/s): min= 6912, max= 7264, per=25.06%, avg=7112.00, stdev=106.73, samples=10 00:46:03.368 iops : min= 864, max= 908, avg=889.00, stdev=13.34, samples=10 00:46:03.368 lat (msec) : 2=0.02%, 4=0.31%, 10=95.37%, 20=4.29% 00:46:03.368 cpu : usr=94.73%, sys=3.78%, ctx=128, majf=0, minf=84 00:46:03.368 IO depths : 1=0.7%, 2=16.4%, 4=56.5%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 issued rwts: total=4453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.368 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:03.368 filename1: (groupid=0, jobs=1): err= 0: pid=1085307: Mon Jul 22 23:26:38 2024 00:46:03.368 read: IOPS=891, BW=7133KiB/s (7304kB/s)(34.8MiB/5002msec) 00:46:03.368 slat (usec): min=5, max=122, avg=34.96, stdev=17.63 00:46:03.368 clat (usec): min=1691, max=16023, avg=8824.93, stdev=1135.16 00:46:03.368 lat (usec): min=1724, max=16061, avg=8859.89, stdev=1136.02 00:46:03.368 clat percentiles (usec): 00:46:03.368 | 1.00th=[ 3884], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8586], 00:46:03.368 | 30.00th=[ 8717], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8848], 00:46:03.368 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9765], 00:46:03.368 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15533], 99.95th=[15664], 00:46:03.368 | 99.99th=[16057] 00:46:03.368 bw ( KiB/s): min= 6912, max= 7408, per=25.12%, avg=7128.67, stdev=152.49, samples=9 00:46:03.368 iops : min= 864, max= 926, avg=891.00, stdev=19.08, samples=9 00:46:03.368 lat (msec) : 2=0.02%, 4=1.05%, 10=94.87%, 20=4.06% 00:46:03.368 cpu : usr=94.46%, sys=4.74%, ctx=9, majf=0, minf=124 00:46:03.368 IO depths : 1=2.1%, 2=22.4%, 4=52.2%, 8=23.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.368 issued rwts: total=4460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.368 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:03.368 00:46:03.368 Run status group 0 (all jobs): 00:46:03.368 READ: bw=27.7MiB/s (29.1MB/s), 6997KiB/s-7150KiB/s (7165kB/s-7321kB/s), io=139MiB (146MB), run=5002-5007msec 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.368 00:46:03.368 real 0m25.672s 00:46:03.368 user 4m32.435s 00:46:03.368 sys 0m7.888s 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:03.368 23:26:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:03.368 ************************************ 00:46:03.368 END TEST fio_dif_rand_params 00:46:03.368 ************************************ 00:46:03.368 23:26:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:46:03.368 23:26:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:46:03.368 23:26:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:03.368 23:26:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:03.368 23:26:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:03.369 ************************************ 00:46:03.369 START TEST fio_dif_digest 00:46:03.369 ************************************ 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.369 bdev_null0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.369 [2024-07-22 23:26:39.413400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:46:03.369 { 00:46:03.369 "params": { 00:46:03.369 "name": "Nvme$subsystem", 00:46:03.369 "trtype": "$TEST_TRANSPORT", 00:46:03.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:03.369 "adrfam": "ipv4", 00:46:03.369 "trsvcid": "$NVMF_PORT", 00:46:03.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:03.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:03.369 "hdgst": ${hdgst:-false}, 00:46:03.369 "ddgst": ${ddgst:-false} 00:46:03.369 }, 00:46:03.369 "method": "bdev_nvme_attach_controller" 00:46:03.369 } 00:46:03.369 EOF 
00:46:03.369 )") 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:46:03.369 "params": { 00:46:03.369 "name": "Nvme0", 00:46:03.369 "trtype": "tcp", 00:46:03.369 "traddr": "10.0.0.2", 00:46:03.369 "adrfam": "ipv4", 00:46:03.369 "trsvcid": "4420", 00:46:03.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:03.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:03.369 "hdgst": true, 00:46:03.369 "ddgst": true 00:46:03.369 }, 00:46:03.369 "method": "bdev_nvme_attach_controller" 00:46:03.369 }' 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:03.369 23:26:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:03.628 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:03.628 ... 
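The fio_dif_digest pass that starts below differs from the earlier runs in three ways visible in the trace above: the null bdev is created with --dif-type 3, the generated bdev_nvme_attach_controller params enable NVMe/TCP header and data digests ("hdgst": true, "ddgst": true), and fio runs with bs=128k, numjobs=3, iodepth=3, runtime=10. A hand-run approximation is sketched below; rpc.py again stands in for rpc_cmd, the JSON and plugin paths are placeholders, and the fio filename is an assumed bdev name.

# sketch: fio_dif_digest setup and run, RPC arguments copied from the xtrace above
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# host side: write a JSON config like the one printed above (with "hdgst": true
# and "ddgst": true in the attach params) to ./dif_digest.json, then:
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=./dif_digest.json \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=10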
00:46:03.628 fio-3.35 00:46:03.628 Starting 3 threads 00:46:03.628 EAL: No free 2048 kB hugepages reported on node 1 00:46:15.842 00:46:15.842 filename0: (groupid=0, jobs=1): err= 0: pid=1086173: Mon Jul 22 23:26:50 2024 00:46:15.842 read: IOPS=112, BW=14.0MiB/s (14.7MB/s)(141MiB/10051msec) 00:46:15.842 slat (usec): min=5, max=186, avg=32.66, stdev=12.51 00:46:15.842 clat (usec): min=15890, max=73677, avg=26651.25, stdev=5986.41 00:46:15.842 lat (usec): min=15917, max=73720, avg=26683.91, stdev=5992.45 00:46:15.842 clat percentiles (usec): 00:46:15.842 | 1.00th=[17433], 5.00th=[18220], 10.00th=[18744], 20.00th=[19268], 00:46:15.842 | 30.00th=[21103], 40.00th=[27657], 50.00th=[28705], 60.00th=[29492], 00:46:15.842 | 70.00th=[30278], 80.00th=[30802], 90.00th=[32113], 95.00th=[33162], 00:46:15.842 | 99.00th=[35390], 99.50th=[38536], 99.90th=[70779], 99.95th=[73925], 00:46:15.842 | 99.99th=[73925] 00:46:15.842 bw ( KiB/s): min=12032, max=19968, per=33.91%, avg=14401.95, stdev=2949.47, samples=20 00:46:15.842 iops : min= 94, max= 156, avg=112.50, stdev=23.01, samples=20 00:46:15.842 lat (msec) : 20=25.35%, 50=74.20%, 100=0.44% 00:46:15.842 cpu : usr=94.22%, sys=4.88%, ctx=42, majf=0, minf=145 00:46:15.842 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:15.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:15.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:15.842 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:15.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:15.842 filename0: (groupid=0, jobs=1): err= 0: pid=1086174: Mon Jul 22 23:26:50 2024 00:46:15.842 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(134MiB/10051msec) 00:46:15.842 slat (usec): min=6, max=147, avg=32.12, stdev=12.24 00:46:15.842 clat (usec): min=16866, max=58723, avg=28068.52, stdev=6045.17 00:46:15.842 lat (usec): min=16888, max=58766, avg=28100.64, stdev=6051.60 00:46:15.842 clat percentiles (usec): 00:46:15.842 | 1.00th=[17695], 5.00th=[18744], 10.00th=[19268], 20.00th=[20317], 00:46:15.842 | 30.00th=[21890], 40.00th=[29492], 50.00th=[30802], 60.00th=[31589], 00:46:15.842 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:46:15.842 | 99.00th=[36439], 99.50th=[37487], 99.90th=[54264], 99.95th=[58983], 00:46:15.842 | 99.99th=[58983] 00:46:15.842 bw ( KiB/s): min=11520, max=19712, per=32.22%, avg=13683.20, stdev=2960.47, samples=20 00:46:15.842 iops : min= 90, max= 154, avg=106.90, stdev=23.13, samples=20 00:46:15.842 lat (msec) : 20=17.93%, 50=81.89%, 100=0.19% 00:46:15.842 cpu : usr=94.10%, sys=5.12%, ctx=21, majf=0, minf=206 00:46:15.842 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:15.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:15.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:15.842 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:15.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:15.842 filename0: (groupid=0, jobs=1): err= 0: pid=1086175: Mon Jul 22 23:26:50 2024 00:46:15.842 read: IOPS=113, BW=14.1MiB/s (14.8MB/s)(142MiB/10051msec) 00:46:15.842 slat (usec): min=6, max=209, avg=39.24, stdev=15.58 00:46:15.842 clat (usec): min=13970, max=70469, avg=26456.01, stdev=6023.70 00:46:15.842 lat (usec): min=14021, max=70507, avg=26495.25, stdev=6028.57 00:46:15.842 clat percentiles (usec): 00:46:15.842 | 1.00th=[16909], 5.00th=[17695], 
10.00th=[18482], 20.00th=[19268], 00:46:15.842 | 30.00th=[21103], 40.00th=[26870], 50.00th=[28705], 60.00th=[29492], 00:46:15.842 | 70.00th=[30278], 80.00th=[30802], 90.00th=[32113], 95.00th=[32900], 00:46:15.842 | 99.00th=[34866], 99.50th=[37487], 99.90th=[68682], 99.95th=[70779], 00:46:15.842 | 99.99th=[70779] 00:46:15.842 bw ( KiB/s): min=11264, max=20480, per=34.15%, avg=14502.40, stdev=3044.11, samples=20 00:46:15.842 iops : min= 88, max= 160, avg=113.30, stdev=23.78, samples=20 00:46:15.842 lat (msec) : 20=25.53%, 50=74.03%, 100=0.44% 00:46:15.842 cpu : usr=93.53%, sys=4.92%, ctx=231, majf=0, minf=101 00:46:15.842 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:15.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:15.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:15.842 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:15.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:15.842 00:46:15.842 Run status group 0 (all jobs): 00:46:15.842 READ: bw=41.5MiB/s (43.5MB/s), 13.3MiB/s-14.1MiB/s (14.0MB/s-14.8MB/s), io=417MiB (437MB), run=10051-10051msec 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:15.842 00:46:15.842 real 0m11.610s 00:46:15.842 user 0m29.728s 00:46:15.842 sys 0m2.031s 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:15.842 23:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:15.842 ************************************ 00:46:15.842 END TEST fio_dif_digest 00:46:15.842 ************************************ 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:46:15.842 23:26:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:15.842 23:26:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:15.842 rmmod 
nvme_tcp 00:46:15.842 rmmod nvme_fabrics 00:46:15.842 rmmod nvme_keyring 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1080123 ']' 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1080123 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1080123 ']' 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1080123 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1080123 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1080123' 00:46:15.842 killing process with pid 1080123 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1080123 00:46:15.842 23:26:51 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1080123 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:46:15.842 23:26:51 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:17.222 Waiting for block devices as requested 00:46:17.222 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:46:17.222 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:17.222 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:17.480 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:17.480 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:17.481 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:17.740 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:17.740 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:17.740 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:18.000 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:18.000 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:18.000 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:18.259 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:18.259 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:18.259 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:18.259 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:18.519 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:18.519 23:26:54 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:18.519 23:26:54 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:18.519 23:26:54 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:18.519 23:26:54 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:18.519 23:26:54 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:18.519 23:26:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:18.519 23:26:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:21.057 23:26:56 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:46:21.057 00:46:21.057 real 1m12.270s 00:46:21.057 user 6m32.688s 00:46:21.057 sys 0m22.649s 00:46:21.057 23:26:56 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:21.058 23:26:56 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:21.058 ************************************ 00:46:21.058 END TEST nvmf_dif 00:46:21.058 ************************************ 00:46:21.058 23:26:56 -- common/autotest_common.sh@1142 -- # return 0 00:46:21.058 23:26:56 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:21.058 23:26:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:21.058 23:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:21.058 23:26:56 -- common/autotest_common.sh@10 -- # set +x 00:46:21.058 ************************************ 00:46:21.058 START TEST nvmf_abort_qd_sizes 00:46:21.058 ************************************ 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:21.058 * Looking for test storage... 00:46:21.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:21.058 23:26:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:21.058 23:26:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:46:21.058 23:26:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:46:24.349 Found 0000:84:00.0 (0x8086 - 0x159b) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:46:24.349 Found 0000:84:00.1 (0x8086 - 0x159b) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:24.349 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:46:24.350 Found net devices under 0000:84:00.0: cvl_0_0 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:46:24.350 Found net devices under 0000:84:00.1: cvl_0_1 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
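The entries above are gather_supported_nvmf_pci_devs matching the two E810 ports (vendor 0x8086, device 0x159b) and resolving their kernel netdev names, cvl_0_0 and cvl_0_1, through sysfs. A condensed, read-only sketch of that lookup, assuming the same vendor/device IDs:

    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
        [ "$(cat "$pci/device")" = "0x159b" ] || continue
        # a bound port exposes its netdev name under <pci address>/net/
        echo "Found ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
    done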
00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:46:24.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:24.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:46:24.350 00:46:24.350 --- 10.0.0.2 ping statistics --- 00:46:24.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:24.350 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:24.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:24.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:46:24.350 00:46:24.350 --- 10.0.0.1 ping statistics --- 00:46:24.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:24.350 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:46:24.350 23:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:25.731 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:25.731 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:25.731 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:26.675 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1091231 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1091231 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1091231 ']' 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
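nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits in waitforlisten for the RPC socket to come up. A simplified sketch of that start-and-wait step, using the paths from this run; the polling loop is a stand-in for waitforlisten, not a copy of it:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # keep polling the default RPC socket (/var/tmp/spdk.sock) until the target answers
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done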
00:46:26.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:26.960 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:26.960 [2024-07-22 23:27:03.202569] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:46:26.960 [2024-07-22 23:27:03.202740] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:27.231 EAL: No free 2048 kB hugepages reported on node 1 00:46:27.231 [2024-07-22 23:27:03.351559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:27.231 [2024-07-22 23:27:03.506427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:27.231 [2024-07-22 23:27:03.506498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:27.231 [2024-07-22 23:27:03.506518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:27.231 [2024-07-22 23:27:03.506534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:27.231 [2024-07-22 23:27:03.506549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:27.231 [2024-07-22 23:27:03.506700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:27.231 [2024-07-22 23:27:03.506765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:46:27.231 [2024-07-22 23:27:03.509337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:46:27.231 [2024-07-22 23:27:03.509353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:46:27.489 23:27:03 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:27.489 23:27:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:27.489 ************************************ 00:46:27.489 START TEST spdk_target_abort 00:46:27.489 ************************************ 00:46:27.489 23:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:46:27.489 23:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:27.489 23:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:46:27.489 23:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:27.489 23:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.773 spdk_targetn1 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.773 [2024-07-22 23:27:06.647207] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.773 [2024-07-22 23:27:06.681915] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:30.773 23:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:30.773 EAL: No free 2048 kB hugepages 
reported on node 1 00:46:34.056 Initializing NVMe Controllers 00:46:34.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:34.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:34.056 Initialization complete. Launching workers. 00:46:34.056 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9191, failed: 0 00:46:34.056 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7937 00:46:34.056 success 744, unsuccess 510, failed 0 00:46:34.056 23:27:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:34.056 23:27:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:34.056 EAL: No free 2048 kB hugepages reported on node 1 00:46:37.338 Initializing NVMe Controllers 00:46:37.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:37.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:37.338 Initialization complete. Launching workers. 00:46:37.338 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8410, failed: 0 00:46:37.338 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7178 00:46:37.338 success 295, unsuccess 937, failed 0 00:46:37.338 23:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:37.338 23:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:37.338 EAL: No free 2048 kB hugepages reported on node 1 00:46:40.622 Initializing NVMe Controllers 00:46:40.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:40.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:40.622 Initialization complete. Launching workers. 
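Each of these abort passes is driven by rabort, which builds the transport ID string one field at a time and invokes the SPDK abort example once per queue depth. Compacted, the loop looks like this, with the address and NQN used in this run:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # mixed 4 KiB read/write load; aborts are issued against the outstanding I/O
        "$spdk/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
    done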
00:46:40.622 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27941, failed: 0 00:46:40.622 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2662, failed to submit 25279 00:46:40.622 success 262, unsuccess 2400, failed 0 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.622 23:27:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1091231 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1091231 ']' 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1091231 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091231 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091231' 00:46:41.997 killing process with pid 1091231 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1091231 00:46:41.997 23:27:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1091231 00:46:41.997 00:46:41.997 real 0m14.465s 00:46:41.997 user 0m55.092s 00:46:41.997 sys 0m2.870s 00:46:41.997 23:27:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:41.997 23:27:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:41.997 ************************************ 00:46:41.997 END TEST spdk_target_abort 00:46:41.997 ************************************ 00:46:41.997 23:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:46:41.997 23:27:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:41.997 23:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:41.997 23:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:41.997 23:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:42.257 
************************************ 00:46:42.257 START TEST kernel_target_abort 00:46:42.257 ************************************ 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:42.257 23:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:44.166 Waiting for block devices as requested 00:46:44.166 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:46:44.166 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:44.166 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:44.426 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:44.427 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:44.427 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:44.687 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:44.687 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:44.687 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:44.946 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:44.946 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:44.946 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:45.205 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:45.205 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:45.205 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:45.464 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:45.464 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:46:45.464 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:45.724 No valid GPT data, bailing 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:45.724 23:27:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:46:45.724 00:46:45.724 Discovery Log Number of Records 2, Generation counter 2 00:46:45.724 =====Discovery Log Entry 0====== 00:46:45.724 trtype: tcp 00:46:45.724 adrfam: ipv4 00:46:45.724 subtype: current discovery subsystem 00:46:45.724 treq: not specified, sq flow control disable supported 00:46:45.724 portid: 1 00:46:45.724 trsvcid: 4420 00:46:45.724 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:45.724 traddr: 10.0.0.1 00:46:45.724 eflags: none 00:46:45.724 sectype: none 00:46:45.724 =====Discovery Log Entry 1====== 00:46:45.724 trtype: tcp 00:46:45.724 adrfam: ipv4 00:46:45.724 subtype: nvme subsystem 00:46:45.724 treq: not specified, sq flow control disable supported 00:46:45.724 portid: 1 00:46:45.724 trsvcid: 4420 00:46:45.724 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:45.724 traddr: 10.0.0.1 00:46:45.724 eflags: none 00:46:45.724 sectype: none 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:45.724 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:45.725 23:27:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:45.725 23:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:45.725 EAL: No free 2048 kB hugepages reported on node 1 00:46:49.013 Initializing NVMe Controllers 00:46:49.013 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:49.013 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:49.013 Initialization complete. Launching workers. 00:46:49.013 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24127, failed: 0 00:46:49.013 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24127, failed to submit 0 00:46:49.013 success 0, unsuccess 24127, failed 0 00:46:49.013 23:27:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:49.013 23:27:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:49.013 EAL: No free 2048 kB hugepages reported on node 1 00:46:52.324 Initializing NVMe Controllers 00:46:52.324 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:52.324 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:52.324 Initialization complete. Launching workers. 
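Each pass of the qds loop runs the same abort example binary, only the queue depth changes. Reading the command line (flag meanings follow the usual SPDK example-app conventions and are stated here as an interpretation, not something this log spells out):

    # queue depth 4, mixed read/write at a 50% read mix, 4 KiB I/Os, against the kernel target above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The tool appears to keep that many I/Os outstanding and issue abort commands against them, then reports how many aborts it managed to submit.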
00:46:52.324 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40254, failed: 0 00:46:52.324 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 10154, failed to submit 30100 00:46:52.324 success 0, unsuccess 10154, failed 0 00:46:52.324 23:27:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:52.324 23:27:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:52.324 EAL: No free 2048 kB hugepages reported on node 1 00:46:55.610 Initializing NVMe Controllers 00:46:55.610 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:55.610 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:55.610 Initialization complete. Launching workers. 00:46:55.610 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39836, failed: 0 00:46:55.610 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 9962, failed to submit 29874 00:46:55.610 success 0, unsuccess 9962, failed 0 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:46:55.610 23:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:57.518 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:57.518 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.2 (8086 0e22): ioatdma -> 
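Pulling the three runs together (numbers copied from the traces above), the abort counters stay consistent: aborts submitted plus aborts that failed to submit equals the I/O the namespace completed in every run.

    qd   I/O completed   aborts submitted   failed to submit
     4          24127              24127                  0
    24          40254              10154              30100
    64          39836               9962              29874

Only the shallowest queue got an abort submitted for every I/O; at depths 24 and 64 roughly three quarters of the abort attempts could not be submitted, and in all three runs nothing outright failed.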
vfio-pci 00:46:57.518 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:57.518 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:58.088 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:46:58.348 00:46:58.348 real 0m16.246s 00:46:58.348 user 0m7.039s 00:46:58.348 sys 0m4.473s 00:46:58.348 23:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:58.348 23:27:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:58.348 ************************************ 00:46:58.348 END TEST kernel_target_abort 00:46:58.348 ************************************ 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:58.348 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:58.348 rmmod nvme_tcp 00:46:58.348 rmmod nvme_fabrics 00:46:58.348 rmmod nvme_keyring 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1091231 ']' 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1091231 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1091231 ']' 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1091231 00:46:58.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1091231) - No such process 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1091231 is not found' 00:46:58.608 Process with pid 1091231 is not found 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:46:58.608 23:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:59.985 Waiting for block devices as requested 00:47:00.243 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:47:00.243 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:47:00.503 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:47:00.503 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:47:00.503 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:47:00.762 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:47:00.762 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:47:00.762 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:47:01.022 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:47:01.022 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:47:01.022 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:47:01.022 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:47:01.282 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:47:01.282 0000:80:04.3 (8086 0e23): vfio-pci -> 
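clean_kernel_target, traced a little further up, tears the configfs tree down in reverse order: disable the namespace, unlink the subsystem from the port before removing either directory, then unload the modules and let setup.sh rebind the devices for whatever runs next. Using the same shorthand as the setup sketch earlier, that is roughly:

    echo 0 > "$sub/namespaces/1/enable"     # assumed target of the 'echo 0' in the trace
    rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir  "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh    # rebind devices to vfio-pci, as the listing shows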
ioatdma 00:47:01.282 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:47:01.542 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:47:01.542 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:47:01.542 23:27:37 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:01.802 23:27:37 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:01.802 23:27:37 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:01.802 23:27:37 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:01.802 23:27:37 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:01.802 23:27:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:01.802 23:27:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:03.712 23:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:47:03.712 00:47:03.712 real 0m43.038s 00:47:03.712 user 1m5.255s 00:47:03.712 sys 0m12.669s 00:47:03.712 23:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:03.712 23:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:03.712 ************************************ 00:47:03.712 END TEST nvmf_abort_qd_sizes 00:47:03.712 ************************************ 00:47:03.712 23:27:39 -- common/autotest_common.sh@1142 -- # return 0 00:47:03.712 23:27:39 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:03.712 23:27:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:03.712 23:27:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:03.712 23:27:39 -- common/autotest_common.sh@10 -- # set +x 00:47:03.712 ************************************ 00:47:03.712 START TEST keyring_file 00:47:03.712 ************************************ 00:47:03.712 23:27:39 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:03.972 * Looking for test storage... 
00:47:03.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:03.972 23:27:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:03.972 23:27:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:03.972 23:27:40 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:03.972 23:27:40 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:03.972 23:27:40 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:03.972 23:27:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.972 23:27:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.972 23:27:40 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.972 23:27:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:47:03.972 23:27:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@47 -- # : 0 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:03.972 23:27:40 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:03.972 23:27:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:03.972 23:27:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:03.972 23:27:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:03.972 23:27:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:47:03.973 23:27:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:47:03.973 23:27:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:47:03.973 23:27:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xcKIt6ZUQ8 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:47:03.973 23:27:40 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xcKIt6ZUQ8 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xcKIt6ZUQ8 00:47:03.973 23:27:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xcKIt6ZUQ8 00:47:03.973 23:27:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EVGe7cJNcI 00:47:03.973 23:27:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:47:03.973 23:27:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:47:04.232 23:27:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EVGe7cJNcI 00:47:04.232 23:27:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EVGe7cJNcI 00:47:04.232 23:27:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.EVGe7cJNcI 00:47:04.232 23:27:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=1097630 00:47:04.232 23:27:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:04.232 23:27:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1097630 00:47:04.232 23:27:40 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1097630 ']' 00:47:04.232 23:27:40 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:04.232 23:27:40 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:04.232 23:27:40 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:04.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:04.232 23:27:40 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:04.232 23:27:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:04.232 [2024-07-22 23:27:40.467890] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
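Before anything talks to the keyring, prep_key (keyring/common.sh@15-@23) writes each PSK into a fresh temp file and locks it down to mode 0600; the inline python step turns the raw hex key into the NVMe TLS interchange format, which (as an assumption, since the script body is not shown in the trace) looks like NVMeTLSkey-1:00:<base64 of the key bytes plus a CRC-32>:. Doing the same by hand would be roughly:

    keyfile=$(mktemp)                                          # e.g. /tmp/tmp.xcKIt6ZUQ8 above
    echo 'NVMeTLSkey-1:00:<base64 PSK+CRC32>:' > "$keyfile"    # placeholder value, not a real key
    chmod 0600 "$keyfile"                                      # the keyring refuses anything more permissive, see the 0660 check later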
00:47:04.232 [2024-07-22 23:27:40.468015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097630 ] 00:47:04.232 EAL: No free 2048 kB hugepages reported on node 1 00:47:04.492 [2024-07-22 23:27:40.642025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:04.492 [2024-07-22 23:27:40.793329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:47:05.061 23:27:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:05.061 [2024-07-22 23:27:41.218716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:05.061 null0 00:47:05.061 [2024-07-22 23:27:41.251983] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:05.061 [2024-07-22 23:27:41.252790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:05.061 [2024-07-22 23:27:41.259961] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.061 23:27:41 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:05.061 [2024-07-22 23:27:41.271997] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:47:05.061 request: 00:47:05.061 { 00:47:05.061 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:47:05.061 "secure_channel": false, 00:47:05.061 "listen_address": { 00:47:05.061 "trtype": "tcp", 00:47:05.061 "traddr": "127.0.0.1", 00:47:05.061 "trsvcid": "4420" 00:47:05.061 }, 00:47:05.061 "method": "nvmf_subsystem_add_listener", 00:47:05.061 "req_id": 1 00:47:05.061 } 00:47:05.061 Got JSON-RPC error response 00:47:05.061 response: 00:47:05.061 { 00:47:05.061 "code": -32602, 00:47:05.061 "message": "Invalid parameters" 00:47:05.061 } 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 
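file.sh@43 is a deliberate failure: it re-adds a listener on an address the target already listens on, the target logs 'Listener already exists', and the RPC returns -32602 Invalid parameters, which is exactly what the NOT wrapper is there to catch. Stripped of the tracing, the idiom is roughly the following (a simplified stand-in; the real helper in autotest_common.sh also validates the command first and special-cases exit codes above 128, as the es checks in the trace show):

    NOT() { if "$@"; then return 1; else return 0; fi; }    # succeed only when the wrapped command fails
    NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0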
00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:05.061 23:27:41 keyring_file -- keyring/file.sh@46 -- # bperfpid=1097764 00:47:05.061 23:27:41 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:47:05.061 23:27:41 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1097764 /var/tmp/bperf.sock 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1097764 ']' 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:05.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:05.061 23:27:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:05.061 [2024-07-22 23:27:41.360858] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:47:05.061 [2024-07-22 23:27:41.361020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097764 ] 00:47:05.320 EAL: No free 2048 kB hugepages reported on node 1 00:47:05.320 [2024-07-22 23:27:41.463594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:05.320 [2024-07-22 23:27:41.573419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:05.578 23:27:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:05.578 23:27:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:47:05.578 23:27:41 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:05.578 23:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:06.145 23:27:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.EVGe7cJNcI 00:47:06.403 23:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.EVGe7cJNcI 00:47:06.969 23:27:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:47:06.969 23:27:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:47:06.969 23:27:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:06.969 23:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:06.969 23:27:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:07.227 23:27:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xcKIt6ZUQ8 == \/\t\m\p\/\t\m\p\.\x\c\K\I\t\6\Z\U\Q\8 ]] 00:47:07.227 23:27:43 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:47:07.227 23:27:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:07.227 23:27:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.227 23:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.227 23:27:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:07.790 23:27:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.EVGe7cJNcI == \/\t\m\p\/\t\m\p\.\E\V\G\e\7\c\J\N\c\I ]] 00:47:07.790 23:27:44 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:47:07.790 23:27:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:07.790 23:27:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.790 23:27:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.790 23:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.790 23:27:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:08.723 23:27:44 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:47:08.723 23:27:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:47:08.723 23:27:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:08.723 23:27:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:08.723 23:27:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:08.723 23:27:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:08.723 23:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:09.289 23:27:45 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:09.289 23:27:45 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:09.289 23:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:09.289 [2024-07-22 23:27:45.583238] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:09.547 nvme0n1 00:47:09.547 23:27:45 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:47:09.547 23:27:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:09.547 23:27:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:09.547 23:27:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:09.547 23:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:09.547 23:27:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:09.804 23:27:46 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:47:09.804 23:27:46 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:47:09.804 23:27:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:09.804 23:27:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:09.804 23:27:46 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:09.804 23:27:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:09.804 23:27:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:10.414 23:27:46 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:47:10.414 23:27:46 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:10.678 Running I/O for 1 seconds... 00:47:12.053 00:47:12.053 Latency(us) 00:47:12.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:12.053 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:47:12.053 nvme0n1 : 1.01 6828.72 26.67 0.00 0.00 18653.25 5971.06 27962.03 00:47:12.053 =================================================================================================================== 00:47:12.053 Total : 6828.72 26.67 0.00 0.00 18653.25 5971.06 27962.03 00:47:12.053 0 00:47:12.053 23:27:47 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:12.053 23:27:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:12.311 23:27:48 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:47:12.311 23:27:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:12.311 23:27:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:12.311 23:27:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:12.311 23:27:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:12.312 23:27:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.246 23:27:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:47:13.246 23:27:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:47:13.246 23:27:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:13.246 23:27:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:13.246 23:27:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:13.246 23:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.246 23:27:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:13.505 23:27:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:13.505 23:27:49 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:13.505 23:27:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:47:13.505 23:27:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:13.505 23:27:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:47:13.505 23:27:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:13.505 23:27:49 keyring_file -- 
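The refcount checks before and after the attach are all the same RPC-plus-jq pattern against the bperf socket: key0 moves from 1 to 2 once nvme0 is attached with --psk key0 (the attached controller presumably holds the extra reference) and drops back to 1 after the detach, while key1 stays at 1 throughout. A condensed form of the get_refcnt helper used above:

    get_refcnt() {
        scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }
    get_refcnt key0    # 2 while nvme0 is attached with this key, 1 otherwise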
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:47:13.505 23:27:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:13.505 23:27:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:13.505 23:27:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:14.072 [2024-07-22 23:27:50.284950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:14.072 [2024-07-22 23:27:50.285417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2426ae0 (107): Transport endpoint is not connected 00:47:14.072 [2024-07-22 23:27:50.286396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2426ae0 (9): Bad file descriptor 00:47:14.072 [2024-07-22 23:27:50.287392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:14.072 [2024-07-22 23:27:50.287419] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:14.072 [2024-07-22 23:27:50.287437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:14.072 request: 00:47:14.072 { 00:47:14.072 "name": "nvme0", 00:47:14.072 "trtype": "tcp", 00:47:14.072 "traddr": "127.0.0.1", 00:47:14.072 "adrfam": "ipv4", 00:47:14.072 "trsvcid": "4420", 00:47:14.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:14.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:14.072 "prchk_reftag": false, 00:47:14.072 "prchk_guard": false, 00:47:14.072 "hdgst": false, 00:47:14.072 "ddgst": false, 00:47:14.072 "psk": "key1", 00:47:14.072 "method": "bdev_nvme_attach_controller", 00:47:14.072 "req_id": 1 00:47:14.072 } 00:47:14.072 Got JSON-RPC error response 00:47:14.072 response: 00:47:14.072 { 00:47:14.072 "code": -5, 00:47:14.072 "message": "Input/output error" 00:47:14.072 } 00:47:14.072 23:27:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:47:14.072 23:27:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:14.072 23:27:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:14.072 23:27:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:14.072 23:27:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:47:14.072 23:27:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:14.072 23:27:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:14.072 23:27:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:14.072 23:27:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:14.072 23:27:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:14.638 23:27:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:47:14.638 23:27:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:47:14.638 23:27:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:14.638 23:27:50 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:14.638 23:27:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:14.638 23:27:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:14.638 23:27:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:14.895 23:27:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:14.895 23:27:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:47:14.895 23:27:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:15.460 23:27:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:47:15.460 23:27:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:16.031 23:27:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:47:16.031 23:27:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:47:16.031 23:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:16.965 23:27:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:47:16.965 23:27:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xcKIt6ZUQ8 00:47:16.965 23:27:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:16.965 23:27:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:16.965 23:27:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:17.222 [2024-07-22 23:27:53.489505] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xcKIt6ZUQ8': 0100660 00:47:17.222 [2024-07-22 23:27:53.489555] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:17.222 request: 00:47:17.222 { 00:47:17.222 "name": "key0", 00:47:17.222 "path": "/tmp/tmp.xcKIt6ZUQ8", 00:47:17.222 "method": "keyring_file_add_key", 00:47:17.222 "req_id": 1 00:47:17.222 } 00:47:17.222 Got JSON-RPC error response 00:47:17.222 response: 00:47:17.222 { 00:47:17.222 "code": -1, 00:47:17.222 "message": "Operation not permitted" 00:47:17.222 } 00:47:17.222 23:27:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:47:17.222 23:27:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:17.222 23:27:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:17.222 23:27:53 keyring_file -- 
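file.sh@77 through @81 check the permission guard: with both keys removed (keyring_get_keys reports length 0), key0's file is chmod'ed to 0660 and the re-add is expected to fail, which it does with 'Invalid permissions for key file ... 0100660' and JSON-RPC -1 Operation not permitted; file.sh@84/@85 just below restore mode 0600 and add the key successfully. The short version:

    chmod 0660 /tmp/tmp.xcKIt6ZUQ8
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8   # rejected: Operation not permitted
    chmod 0600 /tmp/tmp.xcKIt6ZUQ8
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8   # accepted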
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:17.222 23:27:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xcKIt6ZUQ8 00:47:17.222 23:27:53 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:17.222 23:27:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xcKIt6ZUQ8 00:47:18.155 23:27:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xcKIt6ZUQ8 00:47:18.155 23:27:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:47:18.155 23:27:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:18.155 23:27:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:18.155 23:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:18.155 23:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:18.155 23:27:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:18.413 23:27:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:47:18.413 23:27:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:18.413 23:27:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:18.413 23:27:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:18.979 [2024-07-22 23:27:55.210075] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xcKIt6ZUQ8': No such file or directory 00:47:18.979 [2024-07-22 23:27:55.210127] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:18.979 [2024-07-22 23:27:55.210175] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:18.979 [2024-07-22 23:27:55.210192] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:18.979 [2024-07-22 23:27:55.210207] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:18.979 request: 00:47:18.979 { 00:47:18.979 "name": "nvme0", 00:47:18.979 "trtype": "tcp", 00:47:18.979 "traddr": "127.0.0.1", 00:47:18.979 "adrfam": "ipv4", 00:47:18.979 
"trsvcid": "4420", 00:47:18.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:18.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:18.979 "prchk_reftag": false, 00:47:18.979 "prchk_guard": false, 00:47:18.979 "hdgst": false, 00:47:18.979 "ddgst": false, 00:47:18.979 "psk": "key0", 00:47:18.979 "method": "bdev_nvme_attach_controller", 00:47:18.979 "req_id": 1 00:47:18.979 } 00:47:18.979 Got JSON-RPC error response 00:47:18.979 response: 00:47:18.979 { 00:47:18.979 "code": -19, 00:47:18.979 "message": "No such device" 00:47:18.979 } 00:47:18.979 23:27:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:47:18.979 23:27:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:18.979 23:27:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:18.979 23:27:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:18.979 23:27:55 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:47:18.979 23:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:19.544 23:27:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:19.544 23:27:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:19.544 23:27:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:19.544 23:27:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:19.544 23:27:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:19.544 23:27:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:19.802 23:27:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.g72qc6aBdx 00:47:19.802 23:27:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:19.802 23:27:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:19.802 23:27:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:47:19.802 23:27:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:47:19.802 23:27:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:47:19.802 23:27:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:47:19.802 23:27:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:47:19.802 23:27:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.g72qc6aBdx 00:47:19.802 23:27:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.g72qc6aBdx 00:47:19.802 23:27:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.g72qc6aBdx 00:47:19.802 23:27:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g72qc6aBdx 00:47:19.802 23:27:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g72qc6aBdx 00:47:20.367 23:27:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:20.367 23:27:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:20.933 nvme0n1 00:47:21.191 
23:27:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:47:21.191 23:27:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:21.191 23:27:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:21.191 23:27:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:21.191 23:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.191 23:27:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:21.756 23:27:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:47:21.756 23:27:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:47:21.756 23:27:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:22.322 23:27:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:47:22.322 23:27:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:47:22.322 23:27:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:22.322 23:27:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:22.322 23:27:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:22.888 23:27:59 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:47:22.888 23:27:59 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:47:22.888 23:27:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:22.888 23:27:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:22.888 23:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:22.888 23:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:22.888 23:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:23.455 23:27:59 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:47:23.455 23:27:59 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:23.455 23:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:23.713 23:27:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:47:23.714 23:27:59 keyring_file -- keyring/file.sh@104 -- # jq length 00:47:23.714 23:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:24.279 23:28:00 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:47:24.279 23:28:00 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g72qc6aBdx 00:47:24.279 23:28:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g72qc6aBdx 00:47:24.846 23:28:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.EVGe7cJNcI 00:47:24.846 23:28:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.EVGe7cJNcI 00:47:25.105 23:28:01 
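file.sh@99 through @104 cover removal of a key that is still in use: keyring_file_remove_key key0 succeeds while nvme0 is attached, but the key lingers with "removed": true and a refcount of 1 (the controller's remaining reference) until the controller is detached, after which keyring_get_keys returns an empty list. Checking that state looks like:

    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .removed'    # true while a controller still references the key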
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:25.105 23:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:25.671 nvme0n1 00:47:25.671 23:28:01 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:47:25.671 23:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:26.238 23:28:02 keyring_file -- keyring/file.sh@112 -- # config='{ 00:47:26.238 "subsystems": [ 00:47:26.238 { 00:47:26.238 "subsystem": "keyring", 00:47:26.238 "config": [ 00:47:26.238 { 00:47:26.238 "method": "keyring_file_add_key", 00:47:26.238 "params": { 00:47:26.238 "name": "key0", 00:47:26.238 "path": "/tmp/tmp.g72qc6aBdx" 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "keyring_file_add_key", 00:47:26.238 "params": { 00:47:26.238 "name": "key1", 00:47:26.238 "path": "/tmp/tmp.EVGe7cJNcI" 00:47:26.238 } 00:47:26.238 } 00:47:26.238 ] 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "subsystem": "iobuf", 00:47:26.238 "config": [ 00:47:26.238 { 00:47:26.238 "method": "iobuf_set_options", 00:47:26.238 "params": { 00:47:26.238 "small_pool_count": 8192, 00:47:26.238 "large_pool_count": 1024, 00:47:26.238 "small_bufsize": 8192, 00:47:26.238 "large_bufsize": 135168 00:47:26.238 } 00:47:26.238 } 00:47:26.238 ] 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "subsystem": "sock", 00:47:26.238 "config": [ 00:47:26.238 { 00:47:26.238 "method": "sock_set_default_impl", 00:47:26.238 "params": { 00:47:26.238 "impl_name": "posix" 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "sock_impl_set_options", 00:47:26.238 "params": { 00:47:26.238 "impl_name": "ssl", 00:47:26.238 "recv_buf_size": 4096, 00:47:26.238 "send_buf_size": 4096, 00:47:26.238 "enable_recv_pipe": true, 00:47:26.238 "enable_quickack": false, 00:47:26.238 "enable_placement_id": 0, 00:47:26.238 "enable_zerocopy_send_server": true, 00:47:26.238 "enable_zerocopy_send_client": false, 00:47:26.238 "zerocopy_threshold": 0, 00:47:26.238 "tls_version": 0, 00:47:26.238 "enable_ktls": false 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "sock_impl_set_options", 00:47:26.238 "params": { 00:47:26.238 "impl_name": "posix", 00:47:26.238 "recv_buf_size": 2097152, 00:47:26.238 "send_buf_size": 2097152, 00:47:26.238 "enable_recv_pipe": true, 00:47:26.238 "enable_quickack": false, 00:47:26.238 "enable_placement_id": 0, 00:47:26.238 "enable_zerocopy_send_server": true, 00:47:26.238 "enable_zerocopy_send_client": false, 00:47:26.238 "zerocopy_threshold": 0, 00:47:26.238 "tls_version": 0, 00:47:26.238 "enable_ktls": false 00:47:26.238 } 00:47:26.238 } 00:47:26.238 ] 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "subsystem": "vmd", 00:47:26.238 "config": [] 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "subsystem": "accel", 00:47:26.238 "config": [ 00:47:26.238 { 00:47:26.238 "method": "accel_set_options", 00:47:26.238 "params": { 00:47:26.238 "small_cache_size": 128, 00:47:26.238 "large_cache_size": 16, 00:47:26.238 "task_count": 2048, 00:47:26.238 "sequence_count": 2048, 00:47:26.238 "buf_count": 2048 00:47:26.238 } 00:47:26.238 } 00:47:26.238 ] 00:47:26.238 
}, 00:47:26.238 { 00:47:26.238 "subsystem": "bdev", 00:47:26.238 "config": [ 00:47:26.238 { 00:47:26.238 "method": "bdev_set_options", 00:47:26.238 "params": { 00:47:26.238 "bdev_io_pool_size": 65535, 00:47:26.238 "bdev_io_cache_size": 256, 00:47:26.238 "bdev_auto_examine": true, 00:47:26.238 "iobuf_small_cache_size": 128, 00:47:26.238 "iobuf_large_cache_size": 16 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "bdev_raid_set_options", 00:47:26.238 "params": { 00:47:26.238 "process_window_size_kb": 1024, 00:47:26.238 "process_max_bandwidth_mb_sec": 0 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "bdev_iscsi_set_options", 00:47:26.238 "params": { 00:47:26.238 "timeout_sec": 30 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "bdev_nvme_set_options", 00:47:26.238 "params": { 00:47:26.238 "action_on_timeout": "none", 00:47:26.238 "timeout_us": 0, 00:47:26.238 "timeout_admin_us": 0, 00:47:26.238 "keep_alive_timeout_ms": 10000, 00:47:26.238 "arbitration_burst": 0, 00:47:26.238 "low_priority_weight": 0, 00:47:26.238 "medium_priority_weight": 0, 00:47:26.238 "high_priority_weight": 0, 00:47:26.238 "nvme_adminq_poll_period_us": 10000, 00:47:26.238 "nvme_ioq_poll_period_us": 0, 00:47:26.238 "io_queue_requests": 512, 00:47:26.238 "delay_cmd_submit": true, 00:47:26.238 "transport_retry_count": 4, 00:47:26.238 "bdev_retry_count": 3, 00:47:26.238 "transport_ack_timeout": 0, 00:47:26.238 "ctrlr_loss_timeout_sec": 0, 00:47:26.238 "reconnect_delay_sec": 0, 00:47:26.238 "fast_io_fail_timeout_sec": 0, 00:47:26.238 "disable_auto_failback": false, 00:47:26.238 "generate_uuids": false, 00:47:26.238 "transport_tos": 0, 00:47:26.238 "nvme_error_stat": false, 00:47:26.238 "rdma_srq_size": 0, 00:47:26.238 "io_path_stat": false, 00:47:26.238 "allow_accel_sequence": false, 00:47:26.238 "rdma_max_cq_size": 0, 00:47:26.238 "rdma_cm_event_timeout_ms": 0, 00:47:26.238 "dhchap_digests": [ 00:47:26.238 "sha256", 00:47:26.238 "sha384", 00:47:26.238 "sha512" 00:47:26.238 ], 00:47:26.238 "dhchap_dhgroups": [ 00:47:26.238 "null", 00:47:26.238 "ffdhe2048", 00:47:26.238 "ffdhe3072", 00:47:26.238 "ffdhe4096", 00:47:26.238 "ffdhe6144", 00:47:26.238 "ffdhe8192" 00:47:26.238 ] 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "bdev_nvme_attach_controller", 00:47:26.238 "params": { 00:47:26.238 "name": "nvme0", 00:47:26.238 "trtype": "TCP", 00:47:26.238 "adrfam": "IPv4", 00:47:26.238 "traddr": "127.0.0.1", 00:47:26.238 "trsvcid": "4420", 00:47:26.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:26.238 "prchk_reftag": false, 00:47:26.238 "prchk_guard": false, 00:47:26.238 "ctrlr_loss_timeout_sec": 0, 00:47:26.238 "reconnect_delay_sec": 0, 00:47:26.238 "fast_io_fail_timeout_sec": 0, 00:47:26.238 "psk": "key0", 00:47:26.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:26.238 "hdgst": false, 00:47:26.238 "ddgst": false 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "bdev_nvme_set_hotplug", 00:47:26.238 "params": { 00:47:26.238 "period_us": 100000, 00:47:26.238 "enable": false 00:47:26.238 } 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "method": "bdev_wait_for_examine" 00:47:26.238 } 00:47:26.238 ] 00:47:26.238 }, 00:47:26.238 { 00:47:26.238 "subsystem": "nbd", 00:47:26.238 "config": [] 00:47:26.238 } 00:47:26.238 ] 00:47:26.238 }' 00:47:26.238 23:28:02 keyring_file -- keyring/file.sh@114 -- # killprocess 1097764 00:47:26.238 23:28:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1097764 ']' 00:47:26.238 23:28:02 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 1097764 00:47:26.239 23:28:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1097764 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1097764' 00:47:26.497 killing process with pid 1097764 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@967 -- # kill 1097764 00:47:26.497 Received shutdown signal, test time was about 1.000000 seconds 00:47:26.497 00:47:26.497 Latency(us) 00:47:26.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:26.497 =================================================================================================================== 00:47:26.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:26.497 23:28:02 keyring_file -- common/autotest_common.sh@972 -- # wait 1097764 00:47:26.758 23:28:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=1100173 00:47:26.758 23:28:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1100173 /var/tmp/bperf.sock 00:47:26.758 23:28:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1100173 ']' 00:47:26.758 23:28:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:26.758 23:28:02 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:26.758 23:28:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:26.758 23:28:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:26.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
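The bdevperf instance restarted here takes its entire configuration from the JSON echoed onto /dev/fd/63 instead of from a config file on disk. A minimal sketch of that relaunch pattern, using the paths from this workspace and assuming $config already holds the JSON captured by the earlier save_config call (and that the previous bdevperf holding /var/tmp/bperf.sock has already been killed):

    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Start bdevperf idle (-z) with the saved config fed through process substitution;
    # the <(...) file descriptor is what shows up as /dev/fd/63 in the logged command line.
    "$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
             -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!

    # Once the UNIX socket is listening, the new instance is driven over RPC like the old one.
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length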
00:47:26.758 23:28:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:47:26.758 "subsystems": [ 00:47:26.758 { 00:47:26.758 "subsystem": "keyring", 00:47:26.758 "config": [ 00:47:26.758 { 00:47:26.758 "method": "keyring_file_add_key", 00:47:26.758 "params": { 00:47:26.758 "name": "key0", 00:47:26.758 "path": "/tmp/tmp.g72qc6aBdx" 00:47:26.758 } 00:47:26.758 }, 00:47:26.758 { 00:47:26.758 "method": "keyring_file_add_key", 00:47:26.758 "params": { 00:47:26.758 "name": "key1", 00:47:26.758 "path": "/tmp/tmp.EVGe7cJNcI" 00:47:26.758 } 00:47:26.758 } 00:47:26.758 ] 00:47:26.758 }, 00:47:26.758 { 00:47:26.758 "subsystem": "iobuf", 00:47:26.758 "config": [ 00:47:26.758 { 00:47:26.759 "method": "iobuf_set_options", 00:47:26.759 "params": { 00:47:26.759 "small_pool_count": 8192, 00:47:26.759 "large_pool_count": 1024, 00:47:26.759 "small_bufsize": 8192, 00:47:26.759 "large_bufsize": 135168 00:47:26.759 } 00:47:26.759 } 00:47:26.759 ] 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "subsystem": "sock", 00:47:26.759 "config": [ 00:47:26.759 { 00:47:26.759 "method": "sock_set_default_impl", 00:47:26.759 "params": { 00:47:26.759 "impl_name": "posix" 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "sock_impl_set_options", 00:47:26.759 "params": { 00:47:26.759 "impl_name": "ssl", 00:47:26.759 "recv_buf_size": 4096, 00:47:26.759 "send_buf_size": 4096, 00:47:26.759 "enable_recv_pipe": true, 00:47:26.759 "enable_quickack": false, 00:47:26.759 "enable_placement_id": 0, 00:47:26.759 "enable_zerocopy_send_server": true, 00:47:26.759 "enable_zerocopy_send_client": false, 00:47:26.759 "zerocopy_threshold": 0, 00:47:26.759 "tls_version": 0, 00:47:26.759 "enable_ktls": false 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "sock_impl_set_options", 00:47:26.759 "params": { 00:47:26.759 "impl_name": "posix", 00:47:26.759 "recv_buf_size": 2097152, 00:47:26.759 "send_buf_size": 2097152, 00:47:26.759 "enable_recv_pipe": true, 00:47:26.759 "enable_quickack": false, 00:47:26.759 "enable_placement_id": 0, 00:47:26.759 "enable_zerocopy_send_server": true, 00:47:26.759 "enable_zerocopy_send_client": false, 00:47:26.759 "zerocopy_threshold": 0, 00:47:26.759 "tls_version": 0, 00:47:26.759 "enable_ktls": false 00:47:26.759 } 00:47:26.759 } 00:47:26.759 ] 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "subsystem": "vmd", 00:47:26.759 "config": [] 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "subsystem": "accel", 00:47:26.759 "config": [ 00:47:26.759 { 00:47:26.759 "method": "accel_set_options", 00:47:26.759 "params": { 00:47:26.759 "small_cache_size": 128, 00:47:26.759 "large_cache_size": 16, 00:47:26.759 "task_count": 2048, 00:47:26.759 "sequence_count": 2048, 00:47:26.759 "buf_count": 2048 00:47:26.759 } 00:47:26.759 } 00:47:26.759 ] 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "subsystem": "bdev", 00:47:26.759 "config": [ 00:47:26.759 { 00:47:26.759 "method": "bdev_set_options", 00:47:26.759 "params": { 00:47:26.759 "bdev_io_pool_size": 65535, 00:47:26.759 "bdev_io_cache_size": 256, 00:47:26.759 "bdev_auto_examine": true, 00:47:26.759 "iobuf_small_cache_size": 128, 00:47:26.759 "iobuf_large_cache_size": 16 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "bdev_raid_set_options", 00:47:26.759 "params": { 00:47:26.759 "process_window_size_kb": 1024, 00:47:26.759 "process_max_bandwidth_mb_sec": 0 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "bdev_iscsi_set_options", 00:47:26.759 "params": { 00:47:26.759 "timeout_sec": 30 00:47:26.759 } 00:47:26.759 
}, 00:47:26.759 { 00:47:26.759 "method": "bdev_nvme_set_options", 00:47:26.759 "params": { 00:47:26.759 "action_on_timeout": "none", 00:47:26.759 "timeout_us": 0, 00:47:26.759 "timeout_admin_us": 0, 00:47:26.759 "keep_alive_timeout_ms": 10000, 00:47:26.759 "arbitration_burst": 0, 00:47:26.759 "low_priority_weight": 0, 00:47:26.759 "medium_priority_weight": 0, 00:47:26.759 "high_priority_weight": 0, 00:47:26.759 "nvme_adminq_poll_period_us": 10000, 00:47:26.759 "nvme_ioq_poll_period_us": 0, 00:47:26.759 "io_queue_requests": 512, 00:47:26.759 "delay_cmd_submit": true, 00:47:26.759 "transport_retry_count": 4, 00:47:26.759 "bdev_retry_count": 3, 00:47:26.759 "transport_ack_timeout": 0, 00:47:26.759 "ctrlr_loss_timeout_sec": 0, 00:47:26.759 "reconnect_delay_sec": 0, 00:47:26.759 "fast_io_fail_timeout_sec": 0, 00:47:26.759 "disable_auto_failback": false, 00:47:26.759 "generate_uuids": false, 00:47:26.759 "transport_tos": 0, 00:47:26.759 "nvme_error_stat": false, 00:47:26.759 "rdma_srq_size": 0, 00:47:26.759 "io_path_stat": false, 00:47:26.759 "allow_accel_sequence": false, 00:47:26.759 "rdma_max_cq_size": 0, 00:47:26.759 "rdma_cm_event_timeout_ms": 0, 00:47:26.759 "dhchap_digests": [ 00:47:26.759 "sha256", 00:47:26.759 "sha384", 00:47:26.759 "sha512" 00:47:26.759 ], 00:47:26.759 "dhchap_dhgroups": [ 00:47:26.759 "null", 00:47:26.759 "ffdhe2048", 00:47:26.759 "ffdhe3072", 00:47:26.759 "ffdhe4096", 00:47:26.759 "ffdhe6144", 00:47:26.759 "ffdhe8192" 00:47:26.759 ] 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "bdev_nvme_attach_controller", 00:47:26.759 "params": { 00:47:26.759 "name": "nvme0", 00:47:26.759 "trtype": "TCP", 00:47:26.759 "adrfam": "IPv4", 00:47:26.759 "traddr": "127.0.0.1", 00:47:26.759 "trsvcid": "4420", 00:47:26.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:26.759 "prchk_reftag": false, 00:47:26.759 "prchk_guard": false, 00:47:26.759 "ctrlr_loss_timeout_sec": 0, 00:47:26.759 "reconnect_delay_sec": 0, 00:47:26.759 "fast_io_fail_timeout_sec": 0, 00:47:26.759 "psk": "key0", 00:47:26.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:26.759 "hdgst": false, 00:47:26.759 "ddgst": false 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "bdev_nvme_set_hotplug", 00:47:26.759 "params": { 00:47:26.759 "period_us": 100000, 00:47:26.759 "enable": false 00:47:26.759 } 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "method": "bdev_wait_for_examine" 00:47:26.759 } 00:47:26.759 ] 00:47:26.759 }, 00:47:26.759 { 00:47:26.759 "subsystem": "nbd", 00:47:26.759 "config": [] 00:47:26.759 } 00:47:26.759 ] 00:47:26.759 }' 00:47:26.759 23:28:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:26.759 23:28:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:26.759 [2024-07-22 23:28:02.967888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
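With the controller re-attached over TLS, the test next reads the key list back from bdevperf and filters it with jq. A small sketch of those refcount checks, using only the RPC calls and jq filters that appear in this log (the expected values mirror the (( 2 == 2 )) and (( 1 == 1 )) assertions below):

    bperf_rpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    bperf_rpc keyring_get_keys | jq length                                              # key0 and key1 are both still registered
    bperf_rpc keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt     # key0 is pinned by the attached nvme0 controller
    bperf_rpc keyring_get_keys | jq '.[] | select(.name == "key1")' | jq -r .refcnt     # key1 is registered but unused here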
00:47:26.759 [2024-07-22 23:28:02.968074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100173 ] 00:47:26.759 EAL: No free 2048 kB hugepages reported on node 1 00:47:27.077 [2024-07-22 23:28:03.085651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:27.077 [2024-07-22 23:28:03.195839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:27.337 [2024-07-22 23:28:03.402329] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:27.337 23:28:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:27.337 23:28:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:47:27.337 23:28:03 keyring_file -- keyring/file.sh@120 -- # jq length 00:47:27.337 23:28:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:47:27.337 23:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.903 23:28:04 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:47:27.903 23:28:04 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:47:27.903 23:28:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:27.903 23:28:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:27.903 23:28:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:27.903 23:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.903 23:28:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:28.471 23:28:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:28.471 23:28:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:47:28.471 23:28:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:28.471 23:28:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:28.471 23:28:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:28.471 23:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:28.471 23:28:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:29.038 23:28:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:47:29.038 23:28:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:47:29.038 23:28:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:47:29.038 23:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:29.605 23:28:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:47:29.605 23:28:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:29.605 23:28:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.g72qc6aBdx /tmp/tmp.EVGe7cJNcI 00:47:29.605 23:28:05 keyring_file -- keyring/file.sh@20 -- # killprocess 1100173 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1100173 ']' 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1100173 00:47:29.605 23:28:05 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100173 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100173' 00:47:29.605 killing process with pid 1100173 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@967 -- # kill 1100173 00:47:29.605 Received shutdown signal, test time was about 1.000000 seconds 00:47:29.605 00:47:29.605 Latency(us) 00:47:29.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:29.605 =================================================================================================================== 00:47:29.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:29.605 23:28:05 keyring_file -- common/autotest_common.sh@972 -- # wait 1100173 00:47:29.865 23:28:05 keyring_file -- keyring/file.sh@21 -- # killprocess 1097630 00:47:29.865 23:28:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1097630 ']' 00:47:29.865 23:28:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1097630 00:47:29.865 23:28:05 keyring_file -- common/autotest_common.sh@953 -- # uname 00:47:29.865 23:28:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:29.865 23:28:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1097630 00:47:29.865 23:28:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:29.865 23:28:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:29.865 23:28:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1097630' 00:47:29.865 killing process with pid 1097630 00:47:29.865 23:28:06 keyring_file -- common/autotest_common.sh@967 -- # kill 1097630 00:47:29.865 [2024-07-22 23:28:06.042495] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:47:29.865 23:28:06 keyring_file -- common/autotest_common.sh@972 -- # wait 1097630 00:47:30.433 00:47:30.433 real 0m26.637s 00:47:30.433 user 1m10.243s 00:47:30.433 sys 0m5.482s 00:47:30.433 23:28:06 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:30.433 23:28:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:30.433 ************************************ 00:47:30.433 END TEST keyring_file 00:47:30.433 ************************************ 00:47:30.433 23:28:06 -- common/autotest_common.sh@1142 -- # return 0 00:47:30.433 23:28:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:47:30.433 23:28:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:30.433 23:28:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:30.433 23:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:30.433 23:28:06 -- common/autotest_common.sh@10 -- # set +x 00:47:30.433 ************************************ 00:47:30.433 START TEST keyring_linux 00:47:30.433 ************************************ 00:47:30.433 23:28:06 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:30.694 * Looking for test storage... 00:47:30.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:30.694 23:28:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:30.694 23:28:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:30.694 23:28:06 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:30.694 23:28:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:30.694 23:28:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:30.695 23:28:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:30.695 23:28:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.695 23:28:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.695 23:28:06 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.695 23:28:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:30.695 23:28:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:47:30.695 23:28:06 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:30.695 /tmp/:spdk-test:key0 00:47:30.695 23:28:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:30.695 23:28:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:47:30.695 23:28:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:47:30.956 23:28:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:30.956 23:28:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:30.956 /tmp/:spdk-test:key1 00:47:30.956 23:28:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1100782 00:47:30.956 23:28:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:30.956 23:28:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1100782 00:47:30.956 23:28:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1100782 ']' 00:47:30.956 23:28:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:30.956 23:28:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:30.956 23:28:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:30.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:30.956 23:28:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:30.956 23:28:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:30.956 [2024-07-22 23:28:07.141639] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
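Both /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 now hold NVMeTLSkey-1 interchange strings, and the keyring_linux test loads them into the kernel session keyring instead of passing file paths. A minimal sketch of that keyctl round trip, assuming prep_key wrote the interchange string into the key0 file as the echoed path above suggests (the serial number is whatever the kernel assigns):

    psk=$(cat /tmp/:spdk-test:key0)                # NVMeTLSkey-1:00:... interchange string
    keyctl add user :spdk-test:key0 "$psk" @s      # attach to the session keyring; prints the new serial
    sn=$(keyctl search @s user :spdk-test:key0)    # look the serial up again by description
    keyctl print "$sn"                             # must round-trip to the same NVMeTLSkey-1 string
    keyctl unlink "$sn"                            # cleanup step mirrored by linux.sh@34 at the end of the test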
00:47:30.956 [2024-07-22 23:28:07.141829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100782 ] 00:47:30.956 EAL: No free 2048 kB hugepages reported on node 1 00:47:31.216 [2024-07-22 23:28:07.322164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:31.216 [2024-07-22 23:28:07.497164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:31.786 23:28:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:31.786 23:28:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:47:31.786 23:28:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:31.786 23:28:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:31.786 23:28:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:31.786 [2024-07-22 23:28:07.963470] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:31.786 null0 00:47:31.786 [2024-07-22 23:28:07.996689] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:31.786 [2024-07-22 23:28:07.997523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:31.786 23:28:08 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:31.786 23:28:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:31.786 923979420 00:47:31.786 23:28:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:31.786 772673471 00:47:31.786 23:28:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1100811 00:47:31.787 23:28:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:31.787 23:28:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1100811 /var/tmp/bperf.sock 00:47:31.787 23:28:08 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1100811 ']' 00:47:31.787 23:28:08 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:31.787 23:28:08 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:31.787 23:28:08 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:31.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:31.787 23:28:08 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:31.787 23:28:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:32.046 [2024-07-22 23:28:08.105518] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
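Because this bdevperf was started with --wait-for-rpc, the test has to enable the Linux keyring backend and finish subsystem init over RPC before it can attach a controller by key name. A sketch of that bring-up sequence, built from exactly the calls issued on /var/tmp/bperf.sock below:

    bperf_rpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    bperf_rpc keyring_linux_set_options --enable    # allow ":spdk-test:*" names to resolve via the kernel keyring
    bperf_rpc framework_start_init                  # leave --wait-for-rpc mode and finish initialization
    bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
              -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    bperf_rpc keyring_get_keys | jq length          # the session-keyring key is now visible to bdevperf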
00:47:32.046 [2024-07-22 23:28:08.105657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100811 ] 00:47:32.046 EAL: No free 2048 kB hugepages reported on node 1 00:47:32.046 [2024-07-22 23:28:08.209452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:32.046 [2024-07-22 23:28:08.317685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:32.304 23:28:08 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:32.304 23:28:08 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:47:32.304 23:28:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:32.304 23:28:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:32.870 23:28:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:32.870 23:28:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:33.436 23:28:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:33.436 23:28:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:34.004 [2024-07-22 23:28:10.169085] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:34.004 nvme0n1 00:47:34.004 23:28:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:34.004 23:28:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:34.004 23:28:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:34.004 23:28:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:34.004 23:28:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:34.004 23:28:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:34.570 23:28:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:34.570 23:28:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:34.570 23:28:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:34.570 23:28:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:34.570 23:28:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:34.570 23:28:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:34.570 23:28:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@25 -- # sn=923979420 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 923979420 == \9\2\3\9\7\9\4\2\0 ]] 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 923979420 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:35.136 23:28:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:35.395 Running I/O for 1 seconds... 00:47:36.769 00:47:36.769 Latency(us) 00:47:36.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:36.769 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:36.769 nvme0n1 : 1.02 7275.32 28.42 0.00 0.00 17425.71 5000.15 22816.24 00:47:36.769 =================================================================================================================== 00:47:36.769 Total : 7275.32 28.42 0.00 0.00 17425.71 5000.15 22816.24 00:47:36.769 0 00:47:36.769 23:28:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:36.769 23:28:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:37.027 23:28:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:37.027 23:28:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:37.027 23:28:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:37.027 23:28:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:37.027 23:28:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:37.027 23:28:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:37.593 23:28:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:37.593 23:28:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:37.593 23:28:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:37.593 23:28:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.593 23:28:13 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:37.593 23:28:13 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:38.160 [2024-07-22 23:28:14.221714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:38.160 [2024-07-22 23:28:14.221755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e91f10 (107): Transport endpoint is not connected 00:47:38.160 [2024-07-22 23:28:14.222743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e91f10 (9): Bad file descriptor 00:47:38.160 [2024-07-22 23:28:14.223740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:38.160 [2024-07-22 23:28:14.223768] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:38.160 [2024-07-22 23:28:14.223788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:38.160 request: 00:47:38.160 { 00:47:38.160 "name": "nvme0", 00:47:38.160 "trtype": "tcp", 00:47:38.160 "traddr": "127.0.0.1", 00:47:38.160 "adrfam": "ipv4", 00:47:38.160 "trsvcid": "4420", 00:47:38.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:38.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:38.160 "prchk_reftag": false, 00:47:38.160 "prchk_guard": false, 00:47:38.160 "hdgst": false, 00:47:38.160 "ddgst": false, 00:47:38.160 "psk": ":spdk-test:key1", 00:47:38.160 "method": "bdev_nvme_attach_controller", 00:47:38.160 "req_id": 1 00:47:38.160 } 00:47:38.160 Got JSON-RPC error response 00:47:38.160 response: 00:47:38.160 { 00:47:38.160 "code": -5, 00:47:38.160 "message": "Input/output error" 00:47:38.160 } 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@33 -- # sn=923979420 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 923979420 00:47:38.160 1 links removed 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@33 -- # sn=772673471 00:47:38.160 
23:28:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 772673471 00:47:38.160 1 links removed 00:47:38.160 23:28:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1100811 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1100811 ']' 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1100811 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:38.160 23:28:14 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100811 00:47:38.161 23:28:14 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:47:38.161 23:28:14 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:47:38.161 23:28:14 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100811' 00:47:38.161 killing process with pid 1100811 00:47:38.161 23:28:14 keyring_linux -- common/autotest_common.sh@967 -- # kill 1100811 00:47:38.161 Received shutdown signal, test time was about 1.000000 seconds 00:47:38.161 00:47:38.161 Latency(us) 00:47:38.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:38.161 =================================================================================================================== 00:47:38.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:38.161 23:28:14 keyring_linux -- common/autotest_common.sh@972 -- # wait 1100811 00:47:38.419 23:28:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1100782 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1100782 ']' 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1100782 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100782 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100782' 00:47:38.419 killing process with pid 1100782 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@967 -- # kill 1100782 00:47:38.419 23:28:14 keyring_linux -- common/autotest_common.sh@972 -- # wait 1100782 00:47:38.989 00:47:38.989 real 0m8.521s 00:47:38.989 user 0m18.092s 00:47:38.989 sys 0m2.621s 00:47:38.989 23:28:15 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:38.989 23:28:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:38.989 ************************************ 00:47:38.989 END TEST keyring_linux 00:47:38.989 ************************************ 00:47:38.989 23:28:15 -- common/autotest_common.sh@1142 -- # return 0 00:47:38.989 23:28:15 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:47:38.989 23:28:15 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:47:38.989 23:28:15 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:47:38.989 23:28:15 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:47:38.989 23:28:15 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:47:38.989 23:28:15 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:47:38.989 23:28:15 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:47:38.989 23:28:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:47:38.989 23:28:15 -- common/autotest_common.sh@10 -- # set +x 00:47:38.989 23:28:15 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:47:38.989 23:28:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:47:38.989 23:28:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:47:38.989 23:28:15 -- common/autotest_common.sh@10 -- # set +x 00:47:42.282 INFO: APP EXITING 00:47:42.282 INFO: killing all VMs 00:47:42.282 INFO: killing vhost app 00:47:42.282 INFO: EXIT DONE 00:47:43.663 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:47:43.663 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:47:43.663 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:47:43.663 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:47:43.663 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:47:43.663 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:47:43.663 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:47:43.663 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:47:43.663 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:47:43.663 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:47:43.663 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:47:43.923 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:47:43.923 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:47:43.923 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:47:43.923 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:47:43.923 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:47:43.923 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:47:45.842 Cleaning 00:47:45.842 Removing: /var/run/dpdk/spdk0/config 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:45.842 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:45.842 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:45.842 Removing: /var/run/dpdk/spdk1/config 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:45.842 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:45.842 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:45.842 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:45.842 Removing: /var/run/dpdk/spdk1/mp_socket 00:47:45.842 Removing: /var/run/dpdk/spdk2/config 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:45.842 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:45.842 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:46.102 Removing: /var/run/dpdk/spdk3/config 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:46.102 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:46.102 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:46.102 Removing: /var/run/dpdk/spdk4/config 00:47:46.102 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:46.102 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:46.102 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:46.102 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:46.102 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:46.103 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:46.103 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:46.103 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:46.103 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:46.103 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:46.103 Removing: /dev/shm/bdev_svc_trace.1 00:47:46.103 Removing: /dev/shm/nvmf_trace.0 00:47:46.103 Removing: /dev/shm/spdk_tgt_trace.pid743025 00:47:46.103 Removing: /var/run/dpdk/spdk0 00:47:46.103 Removing: /var/run/dpdk/spdk1 00:47:46.103 Removing: /var/run/dpdk/spdk2 00:47:46.103 Removing: /var/run/dpdk/spdk3 00:47:46.103 Removing: /var/run/dpdk/spdk4 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1001689 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1001828 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1002067 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1002338 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1002349 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1003668 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1004729 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1006006 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1007621 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1008727 00:47:46.103 Removing: 
/var/run/dpdk/spdk_pid1009800 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1013970 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1014308 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1015833 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1016820 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1020804 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1022780 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1026372 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1030207 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1039062 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1043614 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1043616 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1059181 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1059843 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1060258 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1060792 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1061621 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1062143 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1063070 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1063602 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1066369 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1066545 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1070312 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1070487 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1072083 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1077119 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1077127 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1080290 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1081573 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1082959 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1083699 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1085122 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1085994 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1091597 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1092343 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1092818 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1094374 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1094770 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1095093 00:47:46.103 Removing: /var/run/dpdk/spdk_pid1097630 00:47:46.363 Removing: /var/run/dpdk/spdk_pid1097764 00:47:46.363 Removing: /var/run/dpdk/spdk_pid1100173 00:47:46.363 Removing: /var/run/dpdk/spdk_pid1100782 00:47:46.363 Removing: /var/run/dpdk/spdk_pid1100811 00:47:46.363 Removing: /var/run/dpdk/spdk_pid741221 00:47:46.363 Removing: /var/run/dpdk/spdk_pid742081 00:47:46.363 Removing: /var/run/dpdk/spdk_pid743025 00:47:46.363 Removing: /var/run/dpdk/spdk_pid743591 00:47:46.363 Removing: /var/run/dpdk/spdk_pid744186 00:47:46.363 Removing: /var/run/dpdk/spdk_pid744424 00:47:46.363 Removing: /var/run/dpdk/spdk_pid745136 00:47:46.363 Removing: /var/run/dpdk/spdk_pid745153 00:47:46.363 Removing: /var/run/dpdk/spdk_pid745524 00:47:46.363 Removing: /var/run/dpdk/spdk_pid747113 00:47:46.363 Removing: /var/run/dpdk/spdk_pid748157 00:47:46.363 Removing: /var/run/dpdk/spdk_pid748484 00:47:46.363 Removing: /var/run/dpdk/spdk_pid748798 00:47:46.363 Removing: /var/run/dpdk/spdk_pid749050 00:47:46.363 Removing: /var/run/dpdk/spdk_pid749325 00:47:46.363 Removing: /var/run/dpdk/spdk_pid749483 00:47:46.363 Removing: /var/run/dpdk/spdk_pid749760 00:47:46.363 Removing: /var/run/dpdk/spdk_pid749946 00:47:46.363 Removing: /var/run/dpdk/spdk_pid750269 00:47:46.363 Removing: /var/run/dpdk/spdk_pid753418 00:47:46.363 Removing: /var/run/dpdk/spdk_pid753708 00:47:46.363 Removing: /var/run/dpdk/spdk_pid753996 00:47:46.363 Removing: /var/run/dpdk/spdk_pid754010 00:47:46.363 Removing: 
/var/run/dpdk/spdk_pid754582 00:47:46.363 Removing: /var/run/dpdk/spdk_pid754711 00:47:46.363 Removing: /var/run/dpdk/spdk_pid755142 00:47:46.363 Removing: /var/run/dpdk/spdk_pid755276 00:47:46.363 Removing: /var/run/dpdk/spdk_pid755577 00:47:46.363 Removing: /var/run/dpdk/spdk_pid755699 00:47:46.363 Removing: /var/run/dpdk/spdk_pid755876 00:47:46.363 Removing: /var/run/dpdk/spdk_pid755898 00:47:46.363 Removing: /var/run/dpdk/spdk_pid756517 00:47:46.363 Removing: /var/run/dpdk/spdk_pid756669 00:47:46.363 Removing: /var/run/dpdk/spdk_pid756869 00:47:46.363 Removing: /var/run/dpdk/spdk_pid757165 00:47:46.363 Removing: /var/run/dpdk/spdk_pid757309 00:47:46.363 Removing: /var/run/dpdk/spdk_pid757389 00:47:46.363 Removing: /var/run/dpdk/spdk_pid757660 00:47:46.363 Removing: /var/run/dpdk/spdk_pid757819 00:47:46.363 Removing: /var/run/dpdk/spdk_pid758092 00:47:46.363 Removing: /var/run/dpdk/spdk_pid758315 00:47:46.363 Removing: /var/run/dpdk/spdk_pid758642 00:47:46.363 Removing: /var/run/dpdk/spdk_pid758799 00:47:46.363 Removing: /var/run/dpdk/spdk_pid759073 00:47:46.363 Removing: /var/run/dpdk/spdk_pid759356 00:47:46.363 Removing: /var/run/dpdk/spdk_pid759895 00:47:46.363 Removing: /var/run/dpdk/spdk_pid760173 00:47:46.363 Removing: /var/run/dpdk/spdk_pid760342 00:47:46.363 Removing: /var/run/dpdk/spdk_pid760609 00:47:46.363 Removing: /var/run/dpdk/spdk_pid760762 00:47:46.363 Removing: /var/run/dpdk/spdk_pid761044 00:47:46.363 Removing: /var/run/dpdk/spdk_pid761203 00:47:46.363 Removing: /var/run/dpdk/spdk_pid761496 00:47:46.363 Removing: /var/run/dpdk/spdk_pid761658 00:47:46.363 Removing: /var/run/dpdk/spdk_pid761938 00:47:46.363 Removing: /var/run/dpdk/spdk_pid762101 00:47:46.363 Removing: /var/run/dpdk/spdk_pid762371 00:47:46.363 Removing: /var/run/dpdk/spdk_pid762447 00:47:46.364 Removing: /var/run/dpdk/spdk_pid762779 00:47:46.364 Removing: /var/run/dpdk/spdk_pid765264 00:47:46.364 Removing: /var/run/dpdk/spdk_pid768180 00:47:46.364 Removing: /var/run/dpdk/spdk_pid775511 00:47:46.364 Removing: /var/run/dpdk/spdk_pid775911 00:47:46.364 Removing: /var/run/dpdk/spdk_pid778576 00:47:46.364 Removing: /var/run/dpdk/spdk_pid778851 00:47:46.364 Removing: /var/run/dpdk/spdk_pid781821 00:47:46.364 Removing: /var/run/dpdk/spdk_pid786151 00:47:46.364 Removing: /var/run/dpdk/spdk_pid788975 00:47:46.364 Removing: /var/run/dpdk/spdk_pid796707 00:47:46.364 Removing: /var/run/dpdk/spdk_pid802279 00:47:46.364 Removing: /var/run/dpdk/spdk_pid803471 00:47:46.364 Removing: /var/run/dpdk/spdk_pid804071 00:47:46.364 Removing: /var/run/dpdk/spdk_pid815525 00:47:46.364 Removing: /var/run/dpdk/spdk_pid818071 00:47:46.364 Removing: /var/run/dpdk/spdk_pid872037 00:47:46.624 Removing: /var/run/dpdk/spdk_pid875334 00:47:46.624 Removing: /var/run/dpdk/spdk_pid880145 00:47:46.624 Removing: /var/run/dpdk/spdk_pid884673 00:47:46.624 Removing: /var/run/dpdk/spdk_pid884740 00:47:46.624 Removing: /var/run/dpdk/spdk_pid885333 00:47:46.624 Removing: /var/run/dpdk/spdk_pid885924 00:47:46.624 Removing: /var/run/dpdk/spdk_pid886522 00:47:46.624 Removing: /var/run/dpdk/spdk_pid886922 00:47:46.624 Removing: /var/run/dpdk/spdk_pid886973 00:47:46.624 Removing: /var/run/dpdk/spdk_pid887191 00:47:46.624 Removing: /var/run/dpdk/spdk_pid887445 00:47:46.624 Removing: /var/run/dpdk/spdk_pid887449 00:47:46.624 Removing: /var/run/dpdk/spdk_pid887985 00:47:46.624 Removing: /var/run/dpdk/spdk_pid888632 00:47:46.624 Removing: /var/run/dpdk/spdk_pid889162 00:47:46.624 Removing: /var/run/dpdk/spdk_pid889564 00:47:46.624 Removing: 
/var/run/dpdk/spdk_pid889686 00:47:46.624 Removing: /var/run/dpdk/spdk_pid889869 00:47:46.624 Removing: /var/run/dpdk/spdk_pid891098 00:47:46.624 Removing: /var/run/dpdk/spdk_pid891936 00:47:46.624 Removing: /var/run/dpdk/spdk_pid897139 00:47:46.624 Removing: /var/run/dpdk/spdk_pid934942 00:47:46.624 Removing: /var/run/dpdk/spdk_pid938452 00:47:46.624 Removing: /var/run/dpdk/spdk_pid939506 00:47:46.624 Removing: /var/run/dpdk/spdk_pid940815 00:47:46.624 Removing: /var/run/dpdk/spdk_pid940952 00:47:46.624 Removing: /var/run/dpdk/spdk_pid941088 00:47:46.624 Removing: /var/run/dpdk/spdk_pid941229 00:47:46.624 Removing: /var/run/dpdk/spdk_pid941802 00:47:46.624 Removing: /var/run/dpdk/spdk_pid943126 00:47:46.624 Removing: /var/run/dpdk/spdk_pid944112 00:47:46.624 Removing: /var/run/dpdk/spdk_pid944669 00:47:46.624 Removing: /var/run/dpdk/spdk_pid946278 00:47:46.624 Removing: /var/run/dpdk/spdk_pid946749 00:47:46.624 Removing: /var/run/dpdk/spdk_pid947509 00:47:46.624 Removing: /var/run/dpdk/spdk_pid950548 00:47:46.624 Removing: /var/run/dpdk/spdk_pid954004 00:47:46.624 Removing: /var/run/dpdk/spdk_pid957328 00:47:46.624 Removing: /var/run/dpdk/spdk_pid980361 00:47:46.624 Removing: /var/run/dpdk/spdk_pid982991 00:47:46.624 Removing: /var/run/dpdk/spdk_pid986780 00:47:46.624 Removing: /var/run/dpdk/spdk_pid987721 00:47:46.624 Removing: /var/run/dpdk/spdk_pid988899 00:47:46.624 Removing: /var/run/dpdk/spdk_pid991766 00:47:46.624 Removing: /var/run/dpdk/spdk_pid994152 00:47:46.624 Removing: /var/run/dpdk/spdk_pid998645 00:47:46.624 Removing: /var/run/dpdk/spdk_pid998649 00:47:46.624 Clean 00:47:46.885 23:28:22 -- common/autotest_common.sh@1451 -- # return 0 00:47:46.885 23:28:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:47:46.885 23:28:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:46.885 23:28:22 -- common/autotest_common.sh@10 -- # set +x 00:47:46.885 23:28:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:47:46.885 23:28:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:46.885 23:28:22 -- common/autotest_common.sh@10 -- # set +x 00:47:46.885 23:28:23 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:46.885 23:28:23 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:46.885 23:28:23 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:46.885 23:28:23 -- spdk/autotest.sh@391 -- # hash lcov 00:47:46.885 23:28:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:47:46.885 23:28:23 -- spdk/autotest.sh@393 -- # hostname 00:47:46.885 23:28:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:47.145 geninfo: WARNING: invalid characters removed from testname! 
00:48:54.842 23:29:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:58.129 23:29:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:06.269 23:29:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:14.441 23:29:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:22.573 23:29:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:32.546 23:30:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:40.664 23:30:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:49:40.664 23:30:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:49:40.664 23:30:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:49:40.664 23:30:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:49:40.664 23:30:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:49:40.664 23:30:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:40.664 23:30:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:40.664 23:30:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:40.664 23:30:15 -- paths/export.sh@5 -- $ export PATH
00:49:40.664 23:30:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:40.664 23:30:15 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:49:40.664 23:30:15 -- common/autobuild_common.sh@447 -- $ date +%s
00:49:40.664 23:30:15 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721683815.XXXXXX
00:49:40.664 23:30:15 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721683815.TChoHq
00:49:40.664 23:30:15 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:49:40.664 23:30:15 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']'
00:49:40.664 23:30:15 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:49:40.664 23:30:15 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:49:40.664 23:30:15 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:49:40.664 23:30:15 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:49:40.664 23:30:15 -- common/autobuild_common.sh@463 -- $ get_config_params
00:49:40.664 23:30:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:49:40.664 23:30:15 -- common/autotest_common.sh@10 -- $ set +x
00:49:40.664 23:30:15 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:49:40.664 23:30:15 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:49:40.664 23:30:15 -- pm/common@17 -- $ local monitor
00:49:40.664 23:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:15 -- pm/common@21 -- $ date +%s
00:49:40.664 23:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:15 -- pm/common@21 -- $ date +%s
00:49:40.664 23:30:15 -- pm/common@25 -- $ sleep 1
00:49:40.664 23:30:15 -- pm/common@21 -- $ date +%s
00:49:40.664 23:30:15 -- pm/common@21 -- $ date +%s
00:49:40.664 23:30:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721683815
00:49:40.664 23:30:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721683815
00:49:40.664 23:30:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721683815
00:49:40.664 23:30:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721683815
00:49:40.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721683815_collect-vmstat.pm.log
00:49:40.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721683815_collect-cpu-load.pm.log
00:49:40.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721683815_collect-cpu-temp.pm.log
00:49:40.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721683815_collect-bmc-pm.bmc.pm.log
00:49:40.664 23:30:16 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:49:40.664 23:30:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:49:40.664 23:30:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:40.664 23:30:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:49:40.664 23:30:16 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:49:40.664 23:30:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:49:40.664 23:30:16 -- spdk/autopackage.sh@19 -- $ timing_finish
00:49:40.664 23:30:16 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:49:40.664 23:30:16 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:49:40.664 23:30:16 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:49:40.664 23:30:16 -- spdk/autopackage.sh@20 -- $ exit 0
00:49:40.664 23:30:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:49:40.664 23:30:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:49:40.664 23:30:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:49:40.664 23:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:49:40.664 23:30:16 -- pm/common@44 -- $ pid=1114636
00:49:40.664 23:30:16 -- pm/common@50 -- $ kill -TERM 1114636
00:49:40.664 23:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:49:40.664 23:30:16 -- pm/common@44 -- $ pid=1114638
00:49:40.664 23:30:16 -- pm/common@50 -- $ kill -TERM 1114638
00:49:40.664 23:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:49:40.664 23:30:16 -- pm/common@44 -- $ pid=1114640
00:49:40.664 23:30:16 -- pm/common@50 -- $ kill -TERM 1114640
00:49:40.664 23:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:40.664 23:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:49:40.664 23:30:16 -- pm/common@44 -- $ pid=1114671
00:49:40.664 23:30:16 -- pm/common@50 -- $ sudo -E kill -TERM 1114671
00:49:40.664 + [[ -n 617769 ]]
00:49:40.664 + sudo kill 617769
00:49:40.674 [Pipeline] }
00:49:40.694 [Pipeline] // stage
00:49:40.700 [Pipeline] }
00:49:40.718 [Pipeline] // timeout
00:49:40.724 [Pipeline] }
00:49:40.739 [Pipeline] // catchError
00:49:40.746 [Pipeline] }
00:49:40.766 [Pipeline] // wrap
00:49:40.773 [Pipeline] }
00:49:40.791 [Pipeline] // catchError
00:49:40.800 [Pipeline] stage
00:49:40.801 [Pipeline] { (Epilogue)
00:49:40.814 [Pipeline] catchError
00:49:40.816 [Pipeline] {
00:49:40.834 [Pipeline] echo
00:49:40.836 Cleanup processes
00:49:40.842 [Pipeline] sh
00:49:41.129 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:41.129 1114806 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:49:41.129 1114901 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:41.142 [Pipeline] sh
00:49:41.424 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:41.424 ++ grep -v 'sudo pgrep'
00:49:41.424 ++ awk '{print $1}'
00:49:41.424 + sudo kill -9 1114806
00:49:41.437 [Pipeline] sh
00:49:41.720 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:50:08.338 [Pipeline] sh
00:50:08.627 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:50:08.886 Artifacts sizes are good
00:50:08.902 [Pipeline] archiveArtifacts
00:50:08.910 Archiving artifacts
00:50:09.402 [Pipeline] sh
00:50:09.697 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:50:09.974 [Pipeline] cleanWs
00:50:09.985 [WS-CLEANUP] Deleting project workspace...
00:50:09.985 [WS-CLEANUP] Deferred wipeout is used...
00:50:09.992 [WS-CLEANUP] done
00:50:09.994 [Pipeline] }
00:50:10.012 [Pipeline] // catchError
00:50:10.024 [Pipeline] sh
00:50:10.310 + logger -p user.info -t JENKINS-CI
00:50:10.320 [Pipeline] }
00:50:10.336 [Pipeline] // stage
00:50:10.342 [Pipeline] }
00:50:10.359 [Pipeline] // node
00:50:10.364 [Pipeline] End of Pipeline
00:50:10.398 Finished: SUCCESS